
Curvilinear coordinates are a coordinate system for Euclidean space obtained from a transformation of the standard Cartesian coordinate system into a system with the same number of coordinates in which the coordinate lines are curved. In the two-dimensional case, for instance, instead of the Cartesian coordinates x and y one uses two new coordinates p and q, whose level curves in the xy-plane form the coordinate grid. The transformation is required to be locally invertible (a one-to-one map) at each point, so that a point given in one coordinate system can be converted to its curvilinear coordinates and back.

Depending on the application, a curvilinear coordinate system may be simpler to use than the Cartesian coordinate system. For instance, a physical problem with spherical symmetry defined in R3 (e.g., motion in the field of a point mass/charge), is usually easier to solve in spherical polar coordinates than in Cartesian coordinates. Also boundary conditions may enforce symmetry. One would describe the motion of a particle in a rectangular box in Cartesian coordinates, whereas one would prefer spherical coordinates for a particle in a sphere.

Many of the concepts in vector calculus, which are given in Cartesian or spherical polar coordinates, can be formulated in arbitrary curvilinear coordinates. This gives a certain economy of thought, as it is possible to derive general expressions, valid for any curvilinear coordinate system, for concepts such as the gradient, divergence, curl, and the Laplacian. Well-known examples of curvilinear systems are polar coordinates for R2, and cylindrical and spherical polar coordinates for R3.

The name curvilinear coordinates, coined by the French mathematician Lamé, derives from the fact that the coordinate surfaces of the curvilinear systems are curved. While a Cartesian coordinate surface is a plane, e.g., z = 0 defines the x-y plane, the coordinate surface r = 1 in spherical polar coordinates is the surface of a unit sphere in R3—which obviously is curved.


General curvilinear coordinates

Fig. 1 - Coordinate surfaces, coordinate lines, and coordinate axes of general curvilinear coordinates.

In Cartesian coordinates, the position of a point P(x,y,z) is determined by the intersection of three mutually perpendicular planes, x = const, y = const, z = const. The coordinates x, y and z are related to three new quantities q1,q2, and q3 by the equations:

x = x(q1,q2,q3)     direct transformation
y = y(q1,q2,q3)     (curvilinear to Cartesian coordinates)
z = z(q1,q2,q3)

The above equation system can be solved for the arguments q1, q2, and q3 with solutions in the form:

q1 = q1(x, y, z)     inverse transformation
q2 = q2(x, y, z)     (Cartesian to curvilinear coordinates)
q3 = q3(x, y, z)

The transformation functions are such that there's a one-to-one relationship between points in the "old" and "new" coordinates, that is, those functions are bijections, and fulfil the following requirements within their domains:

1) They are smooth functions
2) The Jacobian determinant
 {\partial(q_1, q_2, q_3) \over \partial(x, y, z)} =\begin{vmatrix} \frac{\partial q_1}{\partial x} & \frac{\partial q_2}{\partial x} & \frac{\partial q_3}{\partial x} \\ \frac{\partial q_1}{\partial y} & \frac{\partial q_2}{\partial y} & \frac{\partial q_3}{\partial y} \\ \frac{\partial q_1}{\partial z} & \frac{\partial q_2}{\partial z} & \frac{\partial q_3}{\partial z} \end{vmatrix} \neq 0

is not zero; that is, the transformation is invertible according to the inverse function theorem. The condition that the Jacobian determinant is not zero reflects the fact that three surfaces from different families intersect in one and only one point and thus determine the position of this point in a unique way.[1]
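The invertibility condition can be checked numerically. The sketch below (illustrative, using NumPy; plane polar coordinates serve as a two-dimensional example, and the function names are ours) approximates the Jacobian determinant of the inverse transformation by central finite differences and confirms it is nonzero away from the origin:

```python
import numpy as np

# Illustrative example: the inverse transformation for plane polar coordinates,
#   q1 = r = sqrt(x^2 + y^2),  q2 = phi = atan2(y, x)
def inverse_transform(x, y):
    return np.array([np.hypot(x, y), np.arctan2(y, x)])

def jacobian_det(x, y, h=1e-6):
    # Approximate the matrix d(q1, q2)/d(x, y) by central finite differences
    J = np.empty((2, 2))
    for col, (dx, dy) in enumerate([(h, 0.0), (0.0, h)]):
        J[:, col] = (inverse_transform(x + dx, y + dy)
                     - inverse_transform(x - dx, y - dy)) / (2 * h)
    return np.linalg.det(J)

# Away from the origin the determinant equals 1/r, hence is nonzero and the
# transformation is locally invertible there; at r = 5 it should be 0.2
print(jacobian_det(3.0, 4.0))
```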

A given point may be described by specifying either x, y, z or q1, q2, q3 while each of the inverse equations describes a surface in the new coordinates and the intersection of three such surfaces locates the point in the three-dimensional space (Fig. 1). The surfaces q1 = const, q2 = const, q3 = const are called the coordinate surfaces; the space curves formed by their intersection in pairs are called the coordinate lines. The coordinate axes are determined by the tangents to the coordinate lines at the intersection of three surfaces. They are not in general fixed directions in space, as is true for simple Cartesian coordinates. The quantities (q1, q2, q3 ) are the curvilinear coordinates of a point P(q1, q2, q3 ).

In general, (q1, q2 ... qn ) are curvilinear coordinates in n-dimensional space.


Curvilinear coordinates from a mathematical perspective

From a more general and abstract perspective, a curvilinear coordinate system is simply a coordinate patch on the differential manifold En (n-dimensional Euclidean space) that is diffeomorphic to the Cartesian coordinate patch on the manifold.[2] Note that two diffeomorphic coordinate patches on a differential manifold need not overlap differentiably. With this simple definition of a curvilinear coordinate system, all the results that follow below are simply applications of standard theorems in differential topology.

Example: Spherical coordinates

Fig. 2 - Coordinate surfaces, coordinate lines, and coordinate axes of spherical coordinates. Surfaces: r - spheres, θ - cones, φ - half-planes; Lines: r - straight beams, θ - vertical semicircles, φ - horizontal circles; Axes: r - straight beams, θ - tangents to vertical semicircles, φ - tangents to horizontal circles

Spherical coordinates are one of the most used curvilinear coordinate systems in such fields as Earth sciences, cartography, and physics (quantum physics, relativity, etc.). The curvilinear coordinates (q1, q2, q3) in this system are, respectively, r (radial distance or polar radius, r ≥ 0), θ (zenith angle or colatitude, 0 ≤ θ ≤ 180°), and φ (azimuth or longitude, 0 ≤ φ ≤ 360°). The direct relationship between Cartesian and spherical coordinates is given by:

 \begin{align} x & = r \sin\theta \cos\phi \\ y & = r \sin\theta \sin\phi \\ z & = r \cos\theta \end{align}

Solving the above equation system for r, θ, and φ gives the inverse relations between spherical and Cartesian coordinates:

 \begin{align} r & = \sqrt{x^2 + y^2 + z^2} \\ \theta & = \arccos \left( \frac{z}{\sqrt{x^2 + y^2 + z^2}} \right) \\ \phi & = \arctan \left( \frac{y}{x} \right) \end{align}
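As a sanity check on these formulas, the direct and inverse transformations can be composed and should return the starting point. A minimal Python sketch (the function names are illustrative; atan2 is used in place of a bare arctan so that the azimuth lands in the correct quadrant):

```python
import math

def spherical_to_cartesian(r, theta, phi):
    # Direct transformation: x = r sin(theta) cos(phi), etc.
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

def cartesian_to_spherical(x, y, z):
    # Inverse transformation; atan2 picks the correct quadrant for the
    # azimuth, which a bare arctan(y/x) cannot do
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.acos(z / r)
    phi = math.atan2(y, x)
    return r, theta, phi

# The round trip recovers the original coordinates (one-to-one correspondence
# away from the z axis, where phi is undefined)
x, y, z = spherical_to_cartesian(2.0, math.pi / 3, math.pi / 4)
print(cartesian_to_spherical(x, y, z))
```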

The respective spherical coordinate surfaces are derived in terms of Cartesian coordinates by fixing the spherical coordinates in the above inverse transformations to a constant value. Thus (Fig.2), r = const are concentric spherical surfaces centered at the origin, O, of the Cartesian coordinates, θ = const are circular conical surfaces with apex in O and axis the Oz axis, φ = const are half-planes bounded by the Oz axis and perpendicular to the xOy Cartesian coordinate plane. Each spherical coordinate line is formed at the pairwise intersection of the surfaces, corresponding to the other two coordinates: r lines (radial distance) are beams Or at the intersection of the cones θ = const and the half-planes φ = const; θ lines (meridians) are semicircles formed by the intersection of the spheres r = const and the half-planes φ = const ; and φ lines (parallels) are circles in planes parallel to xOy at the intersection of the spheres r = const and the cones θ = const. The location of a point P(r,θ,φ) is determined by the point of intersection of the three coordinate surfaces, or, alternatively, by the point of intersection of the three coordinate lines. The θ and φ axes in P(r,θ,φ) are the mutually perpendicular (orthogonal) tangents to the meridian and parallel of this point, while the r axis is directed along the radial distance and is orthogonal to both θ and φ axes.

The inverse transformations above are smooth (continuously differentiable) functions within their defined domains. The Jacobian (functional determinant) of the inverse transformations is:

J^{-1} =\frac{\partial(r,\theta,\phi)}{\partial(x,y,z)} =\begin{vmatrix} \sin\theta\cos\phi & \sin\theta\sin\phi & \cos\theta \\ \frac{1}{r}\cos\theta\cos\phi & \frac{1}{r}\cos\theta\sin\phi & -\frac{1}{r}\sin\theta \\ -\frac{1}{r}\frac{\sin\phi}{\sin\theta} & \frac{1}{r}\frac{\cos\phi}{\sin\theta} & 0 \end{vmatrix} = \frac{1}{r^2 \sin\theta} \neq 0.
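The closed form 1/(r² sin θ) can be confirmed numerically. The following sketch (illustrative names, NumPy) builds the matrix ∂(r, θ, φ)/∂(x, y, z) column by column via central finite differences and compares its determinant with the formula above:

```python
import numpy as np

def cart_to_sph(p):
    x, y, z = p
    r = np.sqrt(x * x + y * y + z * z)
    return np.array([r, np.arccos(z / r), np.arctan2(y, x)])

def inv_jacobian_det(p, h=1e-6):
    # Build d(r, theta, phi)/d(x, y, z) column by column via central differences
    J = np.empty((3, 3))
    for col in range(3):
        dp = np.zeros(3)
        dp[col] = h
        J[:, col] = (cart_to_sph(p + dp) - cart_to_sph(p - dp)) / (2 * h)
    return np.linalg.det(J)

p = np.array([1.0, 1.0, 1.0])
r = np.linalg.norm(p)
theta = np.arccos(p[2] / r)
# The determinant should match the closed form 1/(r^2 sin(theta))
print(inv_jacobian_det(p), 1.0 / (r**2 * np.sin(theta)))
```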

Curvilinear local basis

Coordinates are used to define the location or distribution of physical quantities, which may be scalars, vectors, or tensors. A scalar quantity is attached to a point, whose location is specified by its coordinates with the use of coordinate lines or coordinate surfaces. Vectors are objects that possess two characteristics: magnitude and direction.

The concept of a basis

To define a vector in terms of coordinates, an additional coordinate-associated structure, called basis, is needed. A basis in three-dimensional space is a set of three linearly independent vectors {e1, e2, e3}, called basis vectors. Each basis vector is associated with a coordinate in the respective dimension. Any vector can be represented as a sum of vectors Anen formed by multiplication of a basis vector by a scalar coefficient, called component. Each vector, then, has exactly one component in each dimension and can be represented by the vector sum: A = A1e1 + A2e2 + A3e3, where An and en are the respective components and basis vectors. A requirement for the coordinate system and its basis is that A1e1 + A2e2 + A3e3 ≠ 0 when at least one of the An ≠ 0. This condition is called linear independence. Linear independence implies that there cannot exist bases with basis vectors of zero magnitude because the latter will give zero-magnitude vectors when multiplied by any component. Non-coplanar vectors are linearly independent, and any triple of non-coplanar vectors can serve as a basis in three dimensions.

Basis vectors in curvilinear coordinates

For general curvilinear coordinates, basis vectors and components vary from point to point. If a vector A whose origin is at point P (q1, q2, q3 ) is moved to point P' (q'1, q'2, q'3 ) in such a way that its direction and orientation are preserved, then the moved vector will be expressed by new components A'n and basis vectors e'n. Therefore, the vector sum that describes vector A in the new location is composed of different vectors, although the sum itself remains the same. A coordinate basis whose basis vectors change their direction and/or magnitude from point to point is called a local basis. All bases associated with curvilinear coordinates are necessarily local. Global bases, that is, bases composed of basis vectors that are the same at all points, can be associated only with linear coordinates. A more exact, though seldom used, expression for such vector sums with local basis vectors is \mathbf{A} = \textstyle \sum_{i=1}^n A_i(q_1\ldots q_n)\mathbf{e}_i(q_1\ldots q_n), where the dependence of both components and basis vectors on location is made explicit (n is the number of dimensions). Local bases are composed of vectors of arbitrary order, whose magnitudes and directions may vary from point to point in space.

Choosing an appropriate basis

Basis vectors are usually associated with a coordinate system by two methods:

  • they can be built along the coordinate axes (colinear with axes) or
  • they can be built to be perpendicular (normal) to the coordinate surfaces.

In the first case (axis-collinear), basis vectors transform like covariant vectors while in the second case (normal to coordinate surfaces), basis vectors transform like contravariant vectors. Those two types of basis vectors are distinguished by the position of their indices: covariant vectors are designated with lower indices while contravariant vectors are designated with upper indices. Thus, depending on the method by which they are built, for a general curvilinear coordinate system there are two sets of basis vectors for every point: {e1, e2, e3} is the covariant basis, and {e1, e2, e3} is the contravariant basis.

Covariant and contravariant bases

A key property of the vector and tensor representation in terms of indexed components and basis vectors is invariance, in the sense that vector components which transform in a covariant manner (or contravariant manner) are paired with basis vectors that transform in a contravariant manner (or covariant manner), and these operations are inverse to one another according to the transformation rules. This means that in a term in which an index occurs twice, one index in the pair must be upper and the other must be lower. Thus in the above vector sums, basis vectors with lower indices are multiplied by components with upper indices or vice versa, so that a given vector can be represented in two ways: A = A1e1 + A2e2 + A3e3 = A1e1 + A2e2 + A3e3. Upon coordinate change, a vector transforms in the same way as its components. Therefore, a vector is covariant or contravariant if, respectively, its components are covariant or contravariant. From the above vector sums, it can be seen that contravariant vectors are represented with covariant basis vectors, and covariant vectors are represented with contravariant basis vectors. This is reflected in the Einstein summation convention, according to which in the vector sums \textstyle \sum_{i=1}^n A^i \mathbf{e}_i and \textstyle \sum_{i=1}^n A_i \mathbf{e}^i the basis vectors and the summation symbols are omitted, leaving only Ai and Ai which represent, respectively, a contravariant and a covariant vector.

Covariant basis

As stated above, contravariant vectors are vectors with contravariant components whose location is determined using covariant basis vectors that are built along the coordinate axes. In analogy to the other coordinate elements, transformation of the covariant basis of general curvilinear coordinates is described starting from the Cartesian coordinate system whose basis is called standard basis. The standard basis is a global basis that is composed of 3 mutually orthogonal vectors {i, j, k} of unit length, that is, the magnitude of each basis vector equals 1. Regardless of the method of building the basis (axis-collinear or normal to coordinate surfaces), in the Cartesian system the result is a single set of basis vectors, namely, the standard basis. To avoid misunderstanding, in this section the standard basis will be thought of as built along the coordinate axes.

Constructing a covariant basis in one dimension
Fig. 3 - Transformation of local covariant basis in the case of general curvilinear coordinates

Consider the one-dimensional curve shown in Fig. 3. At point P, taken as an origin, x is one of the Cartesian coordinates, and q1 is one of the curvilinear coordinates (Fig. 3). The local basis vector is e1 and it is built on the q1 axis which is a tangent to q1 coordinate line at the point P. The axis q1 and thus the vector e1 form an angle α with the Cartesian x axis and the Cartesian basis vector i.

It can be seen from triangle PAB that  \cos \alpha = \tfrac{|\mathbf{i}|}{|\mathbf{e}_1|} where |e1| is the magnitude of the basis vector e1 (the scalar intercept PB) and |i| is the magnitude of the Cartesian basis vector i which is also the projection of e1 on the x axis (the scalar intercept PA). It follows, then, that

|\mathbf{e}_1| = \cfrac{|\mathbf{i}|}{\cos \alpha} and |\mathbf{i}| = |\mathbf{e}_1|\cos \alpha.

However, this method for basis vector transformations using direction cosines is inapplicable to curvilinear coordinates for the following reason: as the distance from P increases, the angle between the curved line q1 and the Cartesian axis x increasingly deviates from α. At the distance PB the true angle is the one that the tangent at point C forms with the x axis, and the latter angle is clearly different from α. The angles that the q1 line and the q1 axis form with the x axis become closer in value the closer one moves towards point P, and become exactly equal at P. Let point E be located very close to P, so close that the distance PE is infinitesimally small. Then PE measured on the q1 axis almost coincides with PE measured on the q1 line. At the same time, the ratio \tfrac{PD}{PE} (PD being the projection of PE on the x axis) becomes almost exactly equal to cos α.

Let the infinitesimally small intercepts PD and PE be labelled, respectively, as dx and dq1. Then

\cos \alpha = \cfrac{dx}{dq_1} and \cfrac{1}{\cos \alpha} = \cfrac{dq_1}{dx}.

Thus, the direction cosines can be substituted in transformations with the more exact ratios between infinitesimally small coordinate intercepts.

Since q_1 \equiv q_1(x,y,z) and x \equiv x(q_1,q_2,q_3) are smooth (continuously differentiable) functions, the transformation ratios can be written as

\cfrac{dq_1}{dx} = \cfrac{dq_1(x,y,z)}{dx} = \cfrac{\partial q_1}{\partial x} and \cfrac{dx}{dq_1} = \cfrac{dx(q_1,q_2,q_3)}{dq_1} = \cfrac{\partial x}{\partial q_1},

that is, those ratios are partial derivatives of coordinates belonging to one system with respect to coordinates belonging to the other system.

From the foregoing discussion, it follows that the component (projection) of e1 on the x axis is

x = \cfrac{|\mathbf{i}|}{|\mathbf{e}_1|}\cdot|\mathbf{e}_1| = \cos \alpha \cdot |\mathbf{e}_1| = \cfrac{\partial x}{\partial q_1}\cdot|\mathbf{e}_1|.

Therefore, the projection of the normalised local basis vector (|e1| = 1) onto the x axis can be made into a vector directed along that axis by multiplying it by the standard basis vector i.

Constructing a covariant basis in three dimensions

Doing the same for the coordinates in the other 2 dimensions, e1 can be expressed as: \mathbf{e}_1 = \tfrac{\partial x}{\partial q_1} \mathbf{i} + \tfrac{\partial y}{\partial q_1} \mathbf{j} + \tfrac{\partial z}{\partial q_1} \mathbf{k}. Similar equations hold for e2 and e3 so that the standard basis {i, j, k} is transformed to local (ordered and normalised) basis {e1, e2, e3} by the following system of equations:

\begin{cases} \tfrac{\partial x}{\partial q_1} \mathbf{i} + \tfrac{\partial y}{\partial q_1} \mathbf{j} + \tfrac{\partial z}{\partial q_1} \mathbf{k} = \mathbf{e}_1 \\ \tfrac{\partial x}{\partial q_2} \mathbf{i} + \tfrac{\partial y}{\partial q_2} \mathbf{j} + \tfrac{\partial z}{\partial q_2} \mathbf{k} = \mathbf{e}_2 \\ \tfrac{\partial x}{\partial q_3} \mathbf{i} + \tfrac{\partial y}{\partial q_3} \mathbf{j} + \tfrac{\partial z}{\partial q_3} \mathbf{k} = \mathbf{e}_3 \end{cases}

Vectors e1, e2, and e3 at the right-hand side of the above equation system are unit vectors (magnitude = 1) directed along the 3 axes of the curvilinear coordinate system. However, basis vectors in a general curvilinear system are not required to be of unit length: they can be of arbitrary magnitude and direction. It can easily be shown that the condition |e1| = |e2| = |e3| = 1 is a result of the above transformation, and not an a priori requirement imposed on the curvilinear basis. Let the local basis {e1, e2, e3} not be normalised, in effect leaving the basis vectors with arbitrary magnitudes. Then, instead of e1, e2, and e3 on the right-hand side, there will be \tfrac{\mathbf{e}_1}{|\mathbf{e}_1|}, \tfrac{\mathbf{e}_2}{|\mathbf{e}_2|}, and \tfrac{\mathbf{e}_3}{|\mathbf{e}_3|}, which are again unit vectors directed along the curvilinear coordinate axes.
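The construction of the local basis from partial derivatives can be carried out numerically. In the sketch below (illustrative names, NumPy, central finite differences), each basis vector e_k is a column of the direct-transformation Jacobian for spherical coordinates; the un-normalised vectors then have the well-known magnitudes |e_r| = 1, |e_θ| = r, |e_φ| = r sin θ:

```python
import numpy as np

def sph_to_cart(q):
    r, theta, phi = q
    return np.array([r * np.sin(theta) * np.cos(phi),
                     r * np.sin(theta) * np.sin(phi),
                     r * np.cos(theta)])

def covariant_basis(q, h=1e-6):
    # e_k = d(x, y, z)/d(q_k): the k-th column of the direct Jacobian matrix
    basis = []
    for k in range(3):
        dq = np.zeros(3)
        dq[k] = h
        basis.append((sph_to_cart(q + dq) - sph_to_cart(q - dq)) / (2 * h))
    return basis

e1, e2, e3 = covariant_basis(np.array([2.0, np.pi / 3, np.pi / 4]))
# Un-normalised magnitudes: |e_r| = 1, |e_theta| = r, |e_phi| = r sin(theta)
print(np.linalg.norm(e1), np.linalg.norm(e2), np.linalg.norm(e3))
```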

By analogous reasoning, but this time projecting the standard basis on the curvilinear axes ( |i| = |j| = |k| = 1 according to the definition of standard basis), one can obtain the inverse transformation from local basis to standard basis:

\begin{cases} \tfrac{\partial q_1}{\partial x} \mathbf{e}_1 + \tfrac{\partial q_2}{\partial x} \mathbf{e}_2 + \tfrac{\partial q_3}{\partial x} \mathbf{e}_3 = \mathbf{i} \\ \tfrac{\partial q_1}{\partial y} \mathbf{e}_1 + \tfrac{\partial q_2}{\partial y} \mathbf{e}_2 + \tfrac{\partial q_3}{\partial y} \mathbf{e}_3 = \mathbf{j} \\ \tfrac{\partial q_1}{\partial z} \mathbf{e}_1 + \tfrac{\partial q_2}{\partial z} \mathbf{e}_2 + \tfrac{\partial q_3}{\partial z} \mathbf{e}_3 = \mathbf{k} \end{cases}

Transformation between curvilinear and Cartesian coordinates

The above systems of linear equations can be written in matrix form as \tfrac{\partial x_i}{\partial q_k} \mathbf{i}_i = \mathbf{e}_k and \tfrac{\partial q_i}{\partial x_k} \mathbf{e}_i = \mathbf{i}_k where xi (i = 1,2,3) are the Cartesian coordinates x, y, z and ii are the standard basis vectors i, j, k. The system matrices (that is, matrices composed of the coefficients in front of the unknowns) are, respectively, \tfrac{\partial x_i}{\partial q_k} and \tfrac{\partial q_i}{\partial x_k}. At the same time, those two matrices are the Jacobian matrices Jik and J−1ik of the transformations of basis vectors from curvilinear to Cartesian coordinates and vice versa. In the second equation system (the inverse transformation), the unknowns are the curvilinear basis vectors which are subject to the condition that in each point of the curvilinear coordinate system there must exist one and only one set of basis vectors. This condition is satisfied iff (if and only if) the equation system has a single solution.

From linear algebra, it is known that a linear equation system has a single solution only if the determinant of its system matrix is non-zero. For the second equation system, the determinant of the system matrix is  \det{J^{-1}_{ik}} = J^{-1} = \tfrac{\partial(q_1, q_2, q_3)}{\partial(x, y, z)} \neq 0 which shows the rationale behind the above requirement concerning the inverse Jacobian determinant.

Another, very important, feature of the above transformations is the nature of the derivatives: in front of the Cartesian basis vectors stand derivatives of Cartesian coordinates, while in front of the curvilinear basis vectors stand derivatives of curvilinear coordinates. In general, the following definition holds:

A covariant vector is an object that in the system of coordinates x is defined by n ordered numbers or functions (components) ai(x1, x2, x3) and in system q it is defined by n ordered components āi(q1, q2, q3) which are connected with ai (x1, x2, x3) in each point of space by the transformation: \bar{a}_k = \tfrac{\partial x^i}{\partial q^k} a_i.

Mnemonic: Coordinates co-vary with the vector.

This definition is so general that it applies to covariance in the very abstract sense, and includes not only basis vectors, but also all vectors, components, tensors, pseudovectors, and pseudotensors (in the last two there is an additional sign flip). It also serves to define tensors in one of their most usual treatments.

Lamé coefficients

The partial derivative coefficients through which vector transformation is achieved are also called scale factors or Lamé coefficients (named after Gabriel Lamé): h_{ik} = \tfrac{\partial x^i}{\partial q^k}. However, the h_{ik} designation is very rarely used, being largely replaced with \sqrt{g_{ik}}, the components of the metric tensor.

Vector and tensor algebra in three-dimensional curvilinear coordinates

Note: the Einstein summation convention of summing on repeated indices is used below.

Vectors in curvilinear coordinates

Let (\mathbf{g}_1, \mathbf{g}_2, \mathbf{g}_3) be an arbitrary basis for three-dimensional Euclidean space. In general, the basis vectors are neither unit vectors nor mutually orthogonal. However, they are required to be linearly independent. Then a vector \mathbf{v} can be expressed as

 \mathbf{v} = v^k~\mathbf{g}_k

The components vk are the contravariant components of the vector \mathbf{v}.

The reciprocal basis (\mathbf{g}^1, \mathbf{g}^2, \mathbf{g}^3) is defined by the relation

 \mathbf{g}^i\cdot\mathbf{g}_j = \delta^i_j

where \delta^i_j is the Kronecker delta.
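Given a concrete covariant basis, the reciprocal basis can be computed by a single matrix inversion: stacking the g_j as rows of a matrix G turns the defining relation into G_recip · Gᵀ = I. A small sketch (illustrative names, NumPy) with an arbitrary non-orthogonal example basis:

```python
import numpy as np

# An arbitrary linearly independent (non-orthogonal) covariant basis,
# with g_1, g_2, g_3 stored as the rows of G
G = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])

# With the reciprocal vectors g^i stored as rows of G_recip, the defining
# relation g^i . g_j = delta^i_j reads G_recip @ G.T = I
G_recip = np.linalg.inv(G.T)

print(G_recip @ G.T)   # identity matrix (up to rounding)
```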

The vector \mathbf{v} can also be expressed in terms of the reciprocal basis:

 \mathbf{v} = v_k~\mathbf{g}^k

The components vk are the covariant components of the vector \mathbf{v}.

Relations between components and basis vectors

From these definitions we can see that

 \mathbf{v}\cdot\mathbf{g}^i = v^k~\mathbf{g}_k\cdot\mathbf{g}^i = v^k~\delta^i_k = v^i
 \mathbf{v}\cdot\mathbf{g}_i = v_k~\mathbf{g}^k\cdot\mathbf{g}_i = v_k~\delta_i^k = v_i


 \mathbf{v}\cdot\mathbf{g}_i = v^k~\mathbf{g}_k\cdot\mathbf{g}_i = g_{ki}~v^k
 \mathbf{v}\cdot\mathbf{g}^i = v_k~\mathbf{g}^k\cdot\mathbf{g}^i = g^{ki}~v_k

Metric tensor

The quantities g_{ij} and g^{ij} are defined as

 g_{ij} = \mathbf{g}_i \cdot \mathbf{g}_j = g_{ji} ~;~~ g^{ij} = \mathbf{g}^i \cdot \mathbf{g}^j = g^{ji}

From the above equations we have

 v^i = g^{ik}~v_k ~;~~ v_i = g_{ik}~v^k ~;~~ \mathbf{g}^i = g^{ij}~\mathbf{g}_j ~;~~ \mathbf{g}_i = g_{ij}~\mathbf{g}^j
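These raising and lowering relations are easy to verify numerically. In the sketch below (arbitrary example basis, illustrative names), g_ij is built from the basis vectors and g^ij as its matrix inverse; lowering and then raising the components of a vector returns the original components:

```python
import numpy as np

# Covariant basis vectors as rows of G (an arbitrary example basis)
G = np.array([[1.0, 0.5, 0.0],
              [0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0]])
g_lower = G @ G.T                  # g_ij = g_i . g_j
g_upper = np.linalg.inv(g_lower)   # g^ij, the matrix inverse of g_ij

v_contra = np.array([1.0, 2.0, 3.0])   # contravariant components v^k
v_cov = g_lower @ v_contra             # v_i = g_ik v^k
v_back = g_upper @ v_cov               # v^i = g^ik v_k recovers the original
print(v_back)
```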

Identity map

The identity map \mathsf{I} defined by \mathsf{I}\cdot\mathbf{v} = \mathbf{v} can be shown to be

 \mathsf{I} = g^{ij}~\mathbf{g}_i\otimes\mathbf{g}_j = g_{ij}~\mathbf{g}^i\otimes\mathbf{g}^j = \mathbf{g}_i\otimes\mathbf{g}^i = \mathbf{g}^i\otimes\mathbf{g}_i

Scalar product

The scalar product of two vectors in curvilinear coordinates is

 \mathbf{u}\cdot\mathbf{v} = u^i~v_i = u_i~v^i = g_{ij}~u^i~v^j = g^{ij}~u_i~v_j

Second-order tensors in curvilinear coordinates

A second-order tensor can be expressed as

 \boldsymbol{S} = S^{ij}~\mathbf{g}_i\otimes\mathbf{g}_j = S^{i}_{~j}~\mathbf{g}_i\otimes\mathbf{g}^j = S_{i}^{~j}~\mathbf{g}^i\otimes\mathbf{g}_j = S_{ij}~\mathbf{g}^i\otimes\mathbf{g}^j

The components S^{ij}\, are called the contravariant components, S^{i}_{~j} the mixed right-covariant components, S_{i}^{~j} the mixed left-covariant components, and S_{ij}\, the covariant components of the second-order tensor.

Relations between components

The components of the second-order tensor are related by

 S^{ij} = g^{ik}~S_k^{~j} = g^{jk}~S^i_{~k} = g^{ik}~g^{jl}~S_{kl}

Action of a second-order tensor on a vector

The action \mathbf{v} = \boldsymbol{S}\cdot\mathbf{u} can be expressed in curvilinear coordinates as

 v^i~\mathbf{g}_i = S^{ij}~u_j~\mathbf{g}_i = S^i_{~j}~u^j~\mathbf{g}_i ~;\qquad v_i~\mathbf{g}^i = S_{ij}~u^j~\mathbf{g}^i = S_{i}^{~j}~u_j~\mathbf{g}^i

Inner product of two second-order tensors

The inner product of two second-order tensors \boldsymbol{U} = \boldsymbol{S}\cdot\boldsymbol{T} can be expressed in curvilinear coordinates as

 U_{ij}~\mathbf{g}^i\otimes\mathbf{g}^j = S_{ik}~T^k_{.~j} ~\mathbf{g}^i\otimes\mathbf{g}^j= S_i^{.~k}~T_{kj}~\mathbf{g}^i\otimes\mathbf{g}^j


 \boldsymbol{U} = S^{ij}~T^m_{.~n}~g_{jm}~\mathbf{g}_i\otimes\mathbf{g}^n = S^i_{.~m}~T^m_{.~n}~\mathbf{g}_i\otimes\mathbf{g}^n = S^{ij}~T_{jn}~\mathbf{g}_i\otimes\mathbf{g}^n

Determinant of a second-order tensor

If \boldsymbol{S} is a second-order tensor, then the determinant is defined by the relation

 \left[\boldsymbol{S}\cdot\mathbf{u}, \boldsymbol{S}\cdot\mathbf{v}, \boldsymbol{S}\cdot\mathbf{w}\right] = \det\boldsymbol{S}\left[\mathbf{u}, \mathbf{v}, \mathbf{w}\right]

where \mathbf{u}, \mathbf{v}, \mathbf{w} are arbitrary vectors and

 \left[\mathbf{u},\mathbf{v},\mathbf{w}\right] := \mathbf{u}\cdot(\mathbf{v}\times\mathbf{w})~.
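This basis-free definition can be checked directly against the matrix determinant. A sketch with random vectors (NumPy; the seed is fixed so the run is reproducible):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((3, 3))        # a second-order tensor as a matrix
u, v, w = rng.standard_normal((3, 3))  # three arbitrary vectors

def triple(a, b, c):
    # Scalar triple product [a, b, c] = a . (b x c)
    return np.dot(a, np.cross(b, c))

lhs = triple(S @ u, S @ v, S @ w)
rhs = np.linalg.det(S) * triple(u, v, w)
print(lhs, rhs)   # the two sides agree
```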

Vector and tensor calculus in three-dimensional curvilinear coordinates

Note: the Einstein summation convention of summing on repeated indices is used below.

Let the position of a point in space be characterized by three coordinate variables (ξ1, ξ2, ξ3). The ξ1 coordinate curve is the curve along which ξ2 and ξ3 are constant. Let \mathbf{x} be the position vector of the point relative to some origin. Then, assuming that such a mapping and its inverse exist and are continuous, we can write [3]

 \mathbf{x} = \boldsymbol{\varphi}(\xi^1, \xi^2, \xi^3) ~;~~ \xi^i = \psi^i(\mathbf{x}) = [\boldsymbol{\varphi}^{-1}(\mathbf{x})]^i

The fields \psi^i(\mathbf{x}) are called the curvilinear coordinate functions of the curvilinear coordinate system \boldsymbol{\psi}(\mathbf{x}) = \boldsymbol{\varphi}^{-1}(\mathbf{x}).

The ξi coordinate curves are defined by the one-parameter family of functions given by

 \mathbf{x}_i(\alpha) = \boldsymbol{\varphi}(\alpha, \xi^j, \xi^k) ~,~~ i\ne j \ne k

with ξj, ξk fixed.

Tangent vector to coordinate curves

The tangent vector to the curve \mathbf{x}_i at the point \mathbf{x}_i(\alpha) (or to the coordinate curve ξi at the point \mathbf{x}) is

 \cfrac{\rm{d}\mathbf{x}_i}{\rm{d}\alpha} \equiv \cfrac{\partial\mathbf{x}}{\partial \xi^i}

Gradient of a scalar field

Let f(\mathbf{x}) be a scalar field in space. Then

 f(\mathbf{x}) = f[\boldsymbol{\varphi}(\xi^1,\xi^2,\xi^3)] = f_\varphi(\xi^1,\xi^2,\xi^3)

The gradient of the field f is defined by

 [\boldsymbol{\nabla}f(\mathbf{x})]\cdot\mathbf{c} = \cfrac{\rm{d}}{\rm{d}\alpha} f(\mathbf{x}+\alpha\mathbf{c})\biggr|_{\alpha=0}

where \mathbf{c} is an arbitrary constant vector. If we define the components ci of vector \mathbf{c} such that

 \xi^i + \alpha~c^i = \psi^i(\mathbf{x} + \alpha~\mathbf{c})

then

 [\boldsymbol{\nabla}f(\mathbf{x})]\cdot\mathbf{c} = \cfrac{\rm{d}}{\rm{d}\alpha} f_\varphi(\xi^1 + \alpha~c^1, \xi^2 + \alpha~c^2, \xi^3 + \alpha~c^3)\biggr|_{\alpha=0} = \cfrac{\partial f_\varphi}{\partial \xi^i}~c^i = \cfrac{\partial f}{\partial \xi^i}~c^i

If we set f(\mathbf{x}) = \psi^i(\mathbf{x}), then since \xi^i = \psi^i(\mathbf{x}), we have

 [\boldsymbol{\nabla}\psi^i(\mathbf{x})]\cdot\mathbf{c} = \cfrac{\partial \psi^i}{\partial \xi^j}~c^j = c^i

which provides a means of extracting the contravariant component of a vector \mathbf{c}.

If \mathbf{g}_i is the covariant (or natural) basis at a point, and if \mathbf{g}^i is the contravariant (or reciprocal) basis at that point, then

 [\boldsymbol{\nabla}f(\mathbf{x})]\cdot\mathbf{c} = \cfrac{\partial f}{\partial \xi^i}~c^i = \left(\cfrac{\partial f}{\partial \xi^i}~\mathbf{g}^i\right)\cdot\left(c^j~\mathbf{g}_j\right) \quad \implies \quad \boldsymbol{\nabla}f(\mathbf{x}) = \cfrac{\partial f}{\partial \xi^i}~\mathbf{g}^i

A brief rationale for this choice of basis is given in the next section.

Gradient of a vector field

A similar process can be used to arrive at the gradient of a vector field \mathbf{f}(\mathbf{x}). The gradient is given by

 [\boldsymbol{\nabla}\mathbf{f}(\mathbf{x})]\cdot\mathbf{c} = \cfrac{\partial \mathbf{f}}{\partial \xi^i}~c^i

If we consider the gradient of the position vector field \mathbf{r}(\mathbf{x}) = \mathbf{x}, then we can show that

 \mathbf{c} = \cfrac{\partial\mathbf{x}}{\partial \xi^i}~c^i = \mathbf{g}_i(\mathbf{x})~c^i ~;~~ \mathbf{g}_i(\mathbf{x}) := \cfrac{\partial\mathbf{x}}{\partial \xi^i}

The vector field \mathbf{g}_i is tangent to the ξi coordinate curve and forms a natural basis at each point on the curve. This basis, as discussed at the beginning of this article, is also called the covariant curvilinear basis. We can also define a reciprocal basis, or contravariant curvilinear basis, \mathbf{g}^i. All the algebraic relations between the basis vectors, as discussed in the section on tensor algebra, apply for the natural basis and its reciprocal at each point \mathbf{x}.

Since \mathbf{c} is arbitrary, we can write

 \boldsymbol{\nabla}\mathbf{f}(\mathbf{x}) = \cfrac{\partial \mathbf{f}}{\partial \xi^i}\otimes\mathbf{g}^i

Note that the contravariant basis vector \mathbf{g}^i is perpendicular to the surface of constant ψi and is given by

 \mathbf{g}^i = \boldsymbol{\nabla}\psi^i

Christoffel symbols of the second kind

The Christoffel symbols of the second kind are defined as

 \Gamma_{ij}^k = \Gamma_{ji}^k \qquad \qquad \mbox{such that} \qquad \cfrac{\partial \mathbf{g}_i}{\partial \xi^j} = \Gamma_{ij}^k~\mathbf{g}_k

This implies that

 \Gamma_{ij}^k = \cfrac{\partial \mathbf{g}_i}{\partial \xi^j}\cdot\mathbf{g}^k = -\mathbf{g}_i\cdot\cfrac{\partial \mathbf{g}^k}{\partial \xi^j}

Other relations that follow are

 \cfrac{\partial \mathbf{g}^i}{\partial \xi^j} = -\Gamma^i_{jk}~\mathbf{g}^k ~;~~ \boldsymbol{\nabla}\mathbf{g}_i = \Gamma_{ij}^k~\mathbf{g}_k\otimes\mathbf{g}^j ~;~~ \boldsymbol{\nabla}\mathbf{g}^i = -\Gamma_{jk}^i~\mathbf{g}^k\otimes\mathbf{g}^j

Another particularly useful relation, which shows that the Christoffel symbol depends only on the metric tensor and its derivatives, is

 \Gamma^k_{ij} = \frac{g^{km}}{2}\left(\frac{\partial g_{mi}}{\partial \xi^j} + \frac{\partial g_{mj}}{\partial \xi^i} - \frac{\partial g_{ij}}{\partial \xi^m} \right)
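This formula is easy to check with a computer algebra system. The sketch below is illustrative (not part of the original derivation) and assumes the cylindrical polar metric diag(1, r^2, 1) as an example; it builds Γ^k_ij directly from the metric and its derivatives with sympy.

```python
# Illustrative sketch: Christoffel symbols of the second kind from the
# metric alone, via Gamma^k_ij = (g^km/2)(d_j g_mi + d_i g_mj - d_m g_ij).
# The cylindrical polar metric diag(1, r^2, 1) is assumed as an example.
import sympy as sp

r, theta, z = sp.symbols('r theta z', positive=True)
xi = [r, theta, z]               # coordinates (xi^1, xi^2, xi^3)
g = sp.diag(1, r**2, 1)          # covariant metric [g_ij]
g_inv = g.inv()                  # contravariant metric [g^ij]

def christoffel(k, i, j):
    """Gamma^k_ij computed from the metric and its derivatives."""
    return sp.simplify(sum(
        g_inv[k, m] / 2 * (sp.diff(g[m, i], xi[j])
                           + sp.diff(g[m, j], xi[i])
                           - sp.diff(g[i, j], xi[m]))
        for m in range(3)))

print(christoffel(0, 1, 1))  # Gamma^1_22 = -r
print(christoffel(1, 0, 1))  # Gamma^2_12 = 1/r
```

The two printed symbols agree with the standard cylindrical values quoted later in the article.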

Explicit expression for the gradient of a vector field

The following expressions for the gradient of a vector field in curvilinear coordinates are quite useful.

 \begin{align} \boldsymbol{\nabla}\mathbf{v} & = \left[\cfrac{\partial v^i}{\partial \xi^k} + \Gamma^i_{lk}~v^l\right]~\mathbf{g}_i\otimes\mathbf{g}^k \\ & = \left[\cfrac{\partial v_i}{\partial \xi^k} - \Gamma^l_{ki}~v_l\right]~\mathbf{g}^i\otimes\mathbf{g}^k \end{align}

Representing a physical vector field

The vector field \mathbf{v} can be represented as

 \mathbf{v} = v_i~\mathbf{g}^i = \hat{v}_i~\hat{\mathbf{g}}^i

where v_i\, are the covariant components of the field, \hat{v}_i are the physical components, and

 \hat{\mathbf{g}}^i = \cfrac{\mathbf{g}^i}{\sqrt{g^{ii}}} \qquad \mbox{no sum}

is the normalized contravariant basis vector.

Divergence of a vector field

The divergence of a vector field \mathbf{v} is defined as

 \mbox{div}~\mathbf{v} = \boldsymbol{\nabla}\cdot\mathbf{v} = \text{tr}(\boldsymbol{\nabla}\mathbf{v})

In terms of components with respect to a curvilinear basis

 \boldsymbol{\nabla}\cdot\mathbf{v} = \cfrac{\partial v^i}{\partial \xi^i} + \Gamma^i_{\ell i}~v^\ell = \left[\cfrac{\partial v_i}{\partial \xi^j} - \Gamma^\ell_{ji}~v_\ell\right]~g^{ij}

Alternative expression for the divergence of a vector field

An alternative equation for the divergence of a vector field is frequently used. To derive this relation recall that

 \boldsymbol{\nabla} \cdot \mathbf{v} = \frac{\partial v^i}{\partial \xi^i} + \Gamma_{\ell i}^i~v^\ell


 \Gamma_{\ell i}^i = \Gamma_{i\ell}^i = \cfrac{g^{mi}}{2}\left[\frac{\partial g_{im}}{\partial \xi^\ell} + \frac{\partial g_{\ell m}}{\partial \xi^i} - \frac{\partial g_{i\ell}}{\partial \xi^m}\right]

Noting that, due to the symmetry of \boldsymbol{g},

 g^{mi}~\frac{\partial g_{\ell m}}{\partial \xi^i} = g^{mi}~ \frac{\partial g_{i\ell}}{\partial \xi^m}

we have

 \boldsymbol{\nabla} \cdot \mathbf{v} = \frac{\partial v^i}{\partial \xi^i} + \cfrac{g^{mi}}{2}~\frac{\partial g_{im}}{\partial \xi^\ell}~v^\ell

Recall that if [g_{ij}] is the matrix whose components are g_{ij}, then the inverse of this matrix is [g_{ij}]^{-1} = [g^{ij}]. The inverse of the matrix is given by

 [g^{ij}] = [g_{ij}]^{-1} = \cfrac{A^{ij}}{g} ~;~~ g := \det([g_{ij}]) = \det\boldsymbol{g}

where A^{ij} are the cofactors of the components g_{ij}. From matrix algebra we have

 g = \det([g_{ij}]) = \sum_i g_{ij}~A^{ij} \quad \implies \quad \frac{\partial g}{\partial g_{ij}} = A^{ij}


 [g^{ij}] = \cfrac{1}{g}~\frac{\partial g}{\partial g_{ij}}

Plugging this relation into the expression for the divergence gives

 \boldsymbol{\nabla} \cdot \mathbf{v} = \frac{\partial v^i}{\partial \xi^i} + \cfrac{1}{2g}~\frac{\partial g}{\partial g_{mi}}~\frac{\partial g_{im}}{\partial \xi^\ell}~v^\ell = \frac{\partial v^i}{\partial \xi^i} + \cfrac{1}{2g}~\frac{\partial g}{\partial \xi^\ell}~v^\ell

A little manipulation leads to the more compact form

 \boldsymbol{\nabla} \cdot \mathbf{v} = \cfrac{1}{\sqrt{g}}~\frac{\partial }{\partial \xi^i}(v^i~\sqrt{g})

Laplacian of a scalar field

The Laplacian of a scalar field \varphi(\mathbf{x}) is defined as

 \nabla^2 \varphi := \boldsymbol{\nabla} \cdot (\boldsymbol{\nabla} \varphi)

Using the alternative expression for the divergence of a vector field gives us

 \nabla^2 \varphi = \cfrac{1}{\sqrt{g}}~\frac{\partial }{\partial \xi^i}([\boldsymbol{\nabla} \varphi]^i~\sqrt{g})


 \boldsymbol{\nabla} \varphi = \frac{\partial \varphi}{\partial \xi^l}~\mathbf{g}^l = g^{li}~\frac{\partial \varphi}{\partial \xi^l}~\mathbf{g}_i \quad \implies \quad [\boldsymbol{\nabla} \varphi]^i = g^{li}~\frac{\partial \varphi}{\partial \xi^l}


 \nabla^2 \varphi = \cfrac{1}{\sqrt{g}}~\frac{\partial }{\partial \xi^i}\left(g^{li}~\frac{\partial \varphi}{\partial \xi^l} ~\sqrt{g}\right)
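As a sanity check, this general expression can be compared with the familiar cylindrical Laplacian. The sympy sketch below is illustrative and assumes cylindrical coordinates, where \sqrt{g} = r and g^{ij} = diag(1, 1/r^2, 1).

```python
# Illustrative check of the general Laplacian formula
#   Laplacian(phi) = (1/sqrt(g)) d_i ( g^{li} dphi/dxi^l sqrt(g) )
# in cylindrical coordinates, where sqrt(g) = r.
import sympy as sp

r, theta, z = sp.symbols('r theta z', positive=True)
xi = [r, theta, z]
g = sp.diag(1, r**2, 1)
g_inv, sqrt_g = g.inv(), sp.sqrt(g.det())   # sqrt(g) simplifies to r

phi = sp.Function('phi')(r, theta, z)
lap = sum(sp.diff(g_inv[l, i] * sp.diff(phi, xi[l]) * sqrt_g, xi[i])
          for i in range(3) for l in range(3)) / sqrt_g

# Familiar cylindrical Laplacian for comparison
lap_std = (sp.diff(phi, r, 2) + sp.diff(phi, r) / r
           + sp.diff(phi, theta, 2) / r**2 + sp.diff(phi, z, 2))

print(sp.simplify(lap - lap_std))  # 0
```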

Gradient of a second-order tensor field

The gradient of a second order tensor field can similarly be expressed as

 \boldsymbol{\nabla}\boldsymbol{S} = \cfrac{\partial \boldsymbol{S}}{\partial \xi^i}\otimes\mathbf{g}^i

Explicit expressions for the gradient

If we consider the expression for the tensor in terms of a contravariant basis, then

 \boldsymbol{\nabla}\boldsymbol{S} = \cfrac{\partial}{\partial \xi^k}[S_{ij}~\mathbf{g}^i\otimes\mathbf{g}^j]\otimes\mathbf{g}^k = \left[\cfrac{\partial S_{ij}}{\partial \xi^k} - \Gamma^l_{ki}~S_{lj} - \Gamma^l_{kj}~S_{il}\right]~\mathbf{g}^i\otimes\mathbf{g}^j\otimes\mathbf{g}^k

We may also write

 \begin{align} \boldsymbol{\nabla}\boldsymbol{S} & = \left[\cfrac{\partial S^{ij}}{\partial \xi^k} + \Gamma^i_{kl}~S^{lj} + \Gamma^j_{kl}~S^{il}\right]~\mathbf{g}_i\otimes\mathbf{g}_j\otimes\mathbf{g}^k \\ & = \left[\cfrac{\partial S^i_{~j}}{\partial \xi^k} + \Gamma^i_{kl}~S^l_{~j} - \Gamma^l_{kj}~S^i_{~l}\right]~\mathbf{g}_i\otimes\mathbf{g}^j\otimes\mathbf{g}^k \\ & = \left[\cfrac{\partial S_i^{~j}}{\partial \xi^k} - \Gamma^l_{ik}~S_l^{~j} + \Gamma^j_{kl}~S_i^{~l}\right]~\mathbf{g}^i\otimes\mathbf{g}_j\otimes\mathbf{g}^k \end{align}

Representing a physical second-order tensor field

The physical components of a second-order tensor field can be obtained by using a normalized contravariant basis, i.e.,

 \boldsymbol{S} = S_{ij}~\mathbf{g}^i\otimes\mathbf{g}^j = \hat{S}_{ij}~\hat{\mathbf{g}}^i\otimes\hat{\mathbf{g}}^j

where the hatted basis vectors have been normalized. This implies that

 \hat{S}_{ij} = S_{ij}~\sqrt{g^{ii}~g^{jj}} \qquad \mbox{no sum}

Divergence of a second-order tensor field

The divergence of a second-order tensor field is defined using

 (\boldsymbol{\nabla}\cdot\boldsymbol{S})\cdot\mathbf{a} = \boldsymbol{\nabla}\cdot(\mathbf{a}\cdot\boldsymbol{S})

where \mathbf{a} is an arbitrary constant vector.

In curvilinear coordinates,

 \begin{align} \boldsymbol{\nabla}\cdot\boldsymbol{S} & = \left[\cfrac{\partial S_{ij}}{\partial \xi^k} - \Gamma^l_{ki}~S_{lj} - \Gamma^l_{kj}~S_{il}\right]~g^{ik}~\mathbf{g}^j \\ & = \left[\cfrac{\partial S^{ij}}{\partial \xi^i} + \Gamma^i_{il}~S^{lj} + \Gamma^j_{il}~S^{il}\right]~\mathbf{g}_j \\ & = \left[\cfrac{\partial S^i_{~j}}{\partial \xi^i} + \Gamma^i_{il}~S^l_{~j} - \Gamma^l_{ij}~S^i_{~l}\right]~\mathbf{g}^j \\ & = \left[\cfrac{\partial S_i^{~j}}{\partial \xi^k} - \Gamma^l_{ik}~S_l^{~j} + \Gamma^j_{kl}~S_i^{~l}\right]~g^{ik}~\mathbf{g}_j \end{align}

Relations between curvilinear and Cartesian basis vectors

Let (\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3) be the usual Cartesian basis vectors for the Euclidean space of interest and let

 \mathbf{g}_i = \boldsymbol{F}\cdot\mathbf{e}_i

where \boldsymbol{F} is a second-order transformation tensor that maps \mathbf{e}_i to \mathbf{g}_i. Then,

 \mathbf{g}_i\otimes\mathbf{e}_i = (\boldsymbol{F}\cdot\mathbf{e}_i)\otimes\mathbf{e}_i = \boldsymbol{F}\cdot(\mathbf{e}_i\otimes\mathbf{e}_i) = \boldsymbol{F}~.

From this relation we can show that

 \mathbf{g}^i = \boldsymbol{F}^{-\rm{T}}\cdot\mathbf{e}^i ~;~~ g^{ij} = [\boldsymbol{F}^{-\rm{1}}\cdot\boldsymbol{F}^{-\rm{T}}]_{ij} ~;~~ g_{ij} = [g^{ij}]^{-1} = [\boldsymbol{F}^{\rm{T}}\cdot\boldsymbol{F}]_{ij}

Let J := \det\boldsymbol{F} be the Jacobian of the transformation. Then, from the definition of the determinant,

 \left[\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3\right] = \det\boldsymbol{F}\left[\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\right] ~.


Since the Cartesian basis is orthonormal,

 \left[\mathbf{e}_1,\mathbf{e}_2,\mathbf{e}_3\right] = 1

we have

 J = \det\boldsymbol{F} = \left[\mathbf{g}_1,\mathbf{g}_2,\mathbf{g}_3\right] = \mathbf{g}_1\cdot(\mathbf{g}_2\times\mathbf{g}_3)

A number of interesting results can be derived using the above relations.

First, consider

 g := \det[g_{ij}]\,


Then, using the relation [g_{ij}] = [\boldsymbol{F}^{\rm{T}}\cdot\boldsymbol{F}],

 g = \det[\boldsymbol{F}^{\rm{T}}]\cdot\det[\boldsymbol{F}] = J\cdot J = J^2

Similarly, we can show that

 \det[g^{ij}] = \cfrac{1}{J^2}

Therefore, using the fact that [gij] = [gij] − 1,

 \cfrac{\partial g}{\partial g_{ij}} = 2~J~\cfrac{\partial J}{\partial g_{ij}} = g~g^{ij}

Another interesting relation is derived below. Recall that

 \mathbf{g}^i\cdot\mathbf{g}_j = \delta^i_j \quad \implies \quad \mathbf{g}^1\cdot\mathbf{g}_1 = 1,~\mathbf{g}^1\cdot\mathbf{g}_2=\mathbf{g}^1\cdot\mathbf{g}_3=0 \quad \implies \quad \mathbf{g}^1 = A~(\mathbf{g}_2\times\mathbf{g}_3)

where A is a, yet undetermined, constant. Then

 \mathbf{g}^1\cdot\mathbf{g}_1 = A~\mathbf{g}_1\cdot(\mathbf{g}_2\times\mathbf{g}_3) = AJ = 1 \quad \implies \quad A = \cfrac{1}{J}

This observation leads to the relations

 \mathbf{g}^1 = \cfrac{1}{J}(\mathbf{g}_2\times\mathbf{g}_3) ~;~~ \mathbf{g}^2 = \cfrac{1}{J}(\mathbf{g}_3\times\mathbf{g}_1) ~;~~ \mathbf{g}^3 = \cfrac{1}{J}(\mathbf{g}_1\times\mathbf{g}_2)

In index notation,

 \epsilon_{ijk}~\mathbf{g}^k = \cfrac{1}{J}(\mathbf{g}_i\times\mathbf{g}_j) = \cfrac{1}{\sqrt{g}}(\mathbf{g}_i\times\mathbf{g}_j)

where \epsilon_{ijk}\, is the usual permutation symbol.
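The reciprocal-basis construction above is easy to verify numerically. The sketch below is illustrative, not part of the original derivation: it assumes the covariant basis of cylindrical coordinates at an arbitrarily chosen point, forms \mathbf{g}^i from the cross products, and checks \mathbf{g}^i\cdot\mathbf{g}_j = \delta^i_j.

```python
# Numerical sketch (illustrative): build the reciprocal basis from
#   g^1 = (g_2 x g_3)/J,  g^2 = (g_3 x g_1)/J,  g^3 = (g_1 x g_2)/J
# and verify g^i . g_j = delta^i_j.  The covariant basis assumed here is
# that of cylindrical coordinates at the point (r, theta) = (2, 0.7).
import numpy as np

r, th = 2.0, 0.7
g1 = np.array([np.cos(th), np.sin(th), 0.0])       # dx/dr
g2 = r * np.array([-np.sin(th), np.cos(th), 0.0])  # dx/dtheta
g3 = np.array([0.0, 0.0, 1.0])                     # dx/dz

J = np.dot(g1, np.cross(g2, g3))                   # triple product, equals r
gu1 = np.cross(g2, g3) / J
gu2 = np.cross(g3, g1) / J
gu3 = np.cross(g1, g2) / J

delta = np.array([[np.dot(a, b) for b in (g1, g2, g3)]
                  for a in (gu1, gu2, gu3)])
print(J)                              # ~ 2.0 (= r)
print(np.allclose(delta, np.eye(3)))  # True
```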

We have not identified an explicit expression for the transformation tensor \boldsymbol{F} because an alternative form of the mapping between curvilinear and Cartesian bases is more useful. Assuming a sufficient degree of smoothness in the mapping (and a bit of abuse of notation), we have

 \mathbf{g}_i = \cfrac{\partial\mathbf{x}}{\partial\xi^i} = \cfrac{\partial\mathbf{x}}{\partial x_j}~\cfrac{\partial x_j}{\partial\xi^i} = \mathbf{e}_j~\cfrac{\partial x_j}{\partial\xi^i}


 \mathbf{e}_i = \mathbf{g}_j~\cfrac{\partial \xi^j}{\partial x_i}

From these results we have

 \mathbf{e}^k\cdot\mathbf{g}_i = \frac{\partial x_k}{\partial \xi^i} \quad \implies \quad \frac{\partial x_k}{\partial \xi^i}~\mathbf{g}^i = \mathbf{e}^k\cdot(\mathbf{g}_i\otimes\mathbf{g}^i) = \mathbf{e}^k


 \mathbf{g}^k = \frac{\partial \xi^k}{\partial x_i}~\mathbf{e}^i

Vector products

The cross product of two vectors is given by

 \mathbf{u}\times\mathbf{v} = \epsilon_{ijk}~\hat{u}_j~\hat{v}_k~\mathbf{e}_i

where εijk is the permutation symbol and \mathbf{e}_i is a Cartesian basis vector. Therefore,

 \mathbf{e}_p\times\mathbf{e}_q = \epsilon_{ipq}~\mathbf{e}_i


 \mathbf{g}_m\times\mathbf{g}_n = \frac{\partial \mathbf{x}}{\partial \xi^m}\times\frac{\partial \mathbf{x}}{\partial \xi^n} = \frac{\partial (x_p~\mathbf{e}_p)}{\partial \xi^m}\times\frac{\partial (x_q~\mathbf{e}_q)}{\partial \xi^n} = \frac{\partial x_p}{\partial \xi^m}~\frac{\partial x_q}{\partial \xi^n}~\mathbf{e}_p\times\mathbf{e}_q = \epsilon_{ipq}~\frac{\partial x_p}{\partial \xi^m}~\frac{\partial x_q}{\partial \xi^n}~\mathbf{e}_i


 (\mathbf{g}_m\times\mathbf{g}_n)\cdot\mathbf{g}_s = \epsilon_{ipq}~\frac{\partial x_p}{\partial \xi^m}~\frac{\partial x_q}{\partial \xi^n}~\frac{\partial x_i}{\partial \xi^s}

Returning back to the vector product and using the relations

 \hat{u}_j = \frac{\partial x_j}{\partial \xi^m}~u^m ~;~~ \hat{v}_k = \frac{\partial x_k}{\partial \xi^n}~v^n ~;~~ \mathbf{e}_i = \frac{\partial x_i}{\partial \xi^s}~\mathbf{g}^s

gives us

 \mathbf{u}\times\mathbf{v} = \epsilon_{ijk}~\hat{u}_j~\hat{v}_k~\mathbf{e}_i = \epsilon_{ijk}~\frac{\partial x_j}{\partial \xi^m}~\frac{\partial x_k}{\partial \xi^n}~\frac{\partial x_i}{\partial \xi^s}~ u^m~v^n~\mathbf{g}^s = [(\mathbf{g}_m\times\mathbf{g}_n)\cdot\mathbf{g}_s]~u^m~v^n~\mathbf{g}^s = \mathcal{E}_{smn}~u^m~v^n~\mathbf{g}^s

The alternating tensor

In an orthonormal right-handed basis, the third-order alternating tensor is defined as

 \boldsymbol{\mathcal{E}} = \epsilon_{ijk}~\mathbf{e}^i\otimes\mathbf{e}^j\otimes\mathbf{e}^k

In a general curvilinear basis the same tensor may be expressed as

 \boldsymbol{\mathcal{E}} = \mathcal{E}_{ijk}~\mathbf{g}^i\otimes\mathbf{g}^j\otimes\mathbf{g}^k = \mathcal{E}^{ijk}~\mathbf{g}_i\otimes\mathbf{g}_j\otimes\mathbf{g}_k

It can be shown that

 \mathcal{E}_{ijk} = \left[\mathbf{g}_i,\mathbf{g}_j,\mathbf{g}_k\right] =(\mathbf{g}_i\times\mathbf{g}_j)\cdot\mathbf{g}_k ~;~~ \mathcal{E}^{ijk} = \left[\mathbf{g}^i,\mathbf{g}^j,\mathbf{g}^k\right]


 \mathbf{g}_i\times\mathbf{g}_j = J~\epsilon_{ijp}~\mathbf{g}^p = \sqrt{g}~\epsilon_{ijp}~\mathbf{g}^p


 \mathcal{E}_{ijk} = J~\epsilon_{ijk} = \sqrt{g}~\epsilon_{ijk}

Similarly, we can show that

 \mathcal{E}^{ijk} = \cfrac{1}{J}~\epsilon^{ijk} = \cfrac{1}{\sqrt{g}}~\epsilon^{ijk}

Example: Cylindrical polar coordinates

For cylindrical coordinates we have

 (x_1, x_2, x_3) = \mathbf{x} = \boldsymbol{\varphi}(\xi^1, \xi^2, \xi^3) = \boldsymbol{\varphi}(r, \theta, z) = (r\cos\theta, r\sin\theta, z)


 (\psi^1(\mathbf{x}), \psi^2(\mathbf{x}), \psi^3(\mathbf{x})) = (\xi^1, \xi^2, \xi^3) \equiv (r, \theta, z) = \left(\sqrt{x_1^2+x_2^2}, \tan^{-1}(x_2/x_1), x_3\right)


 0 < r < \infty ~, ~~ 0 < \theta < 2\pi ~,~~ -\infty < z < \infty

Then the covariant and contravariant basis vectors are

 \begin{align} \mathbf{g}_1 & = \mathbf{e}_r = \mathbf{g}^1 \\ \mathbf{g}_2 & = r~\mathbf{e}_\theta = r^2~\mathbf{g}^2 \\ \mathbf{g}_3 & = \mathbf{e}_z = \mathbf{g}^3 \end{align}

where \mathbf{e}_r, \mathbf{e}_\theta, \mathbf{e}_z are the unit vectors in the r,θ,z directions.

Note that the components of the metric tensor are such that

 g^{ij} = g_{ij} = 0 (i \ne j) ~;~~ \sqrt{g^{11}} = 1,~\sqrt{g^{22}} = \cfrac{1}{r},~\sqrt{g^{33}}=1

which shows that the basis is orthogonal.

The non-zero components of the Christoffel symbol of the second kind are

 \Gamma_{12}^2 = \Gamma_{21}^2 = \cfrac{1}{r} ~;~~ \Gamma_{22}^1 = -r
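These values can be confirmed from the defining relation \partial \mathbf{g}_i/\partial \xi^j = \Gamma_{ij}^k~\mathbf{g}_k. A short sympy sketch (illustrative; the cylindrical basis vectors are written out in Cartesian components):

```python
# Illustrative check of the cylindrical Christoffel symbols from the
# defining relation d g_i / d xi^j = Gamma^k_ij g_k.
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
g1 = sp.Matrix([sp.cos(theta), sp.sin(theta), 0])        # g_1 = e_r
g2 = sp.Matrix([-r*sp.sin(theta), r*sp.cos(theta), 0])   # g_2 = r e_theta

# d g_1 / d theta = (1/r) g_2   =>  Gamma^2_12 = 1/r
print(sp.simplify(g1.diff(theta) - g2 / r))   # zero vector
# d g_2 / d theta = -r g_1      =>  Gamma^1_22 = -r
print(sp.simplify(g2.diff(theta) + r * g1))   # zero vector
```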

Representing a physical vector field

The normalized contravariant basis vectors in cylindrical polar coordinates are

 \hat{\mathbf{g}}^1 = \mathbf{e}_r ~;~~\hat{\mathbf{g}}^2 = \mathbf{e}_\theta ~;~~\hat{\mathbf{g}}^3 = \mathbf{e}_z

and the physical components of a vector \mathbf{v} are

 (\hat{v}_1, \hat{v}_2, \hat{v}_3) = (v_1, v_2/r, v_3) =: (v_r, v_\theta, v_z)

Gradient of a scalar field

The gradient of a scalar field, f(\mathbf{x}), in cylindrical coordinates can now be computed from the general expression in curvilinear coordinates and has the form

 \boldsymbol{\nabla}f = \cfrac{\partial f}{\partial r}~\mathbf{e}_r + \cfrac{1}{r}~\cfrac{\partial f}{\partial \theta}~\mathbf{e}_\theta + \cfrac{\partial f}{\partial z}~\mathbf{e}_z

Gradient of a vector field

Similarly, the gradient of a vector field, \mathbf{v}(\mathbf{x}), in cylindrical coordinates can be shown to be

 \begin{align} \boldsymbol{\nabla}\mathbf{v} & = \cfrac{\partial v_r}{\partial r}~\mathbf{e}_r\otimes\mathbf{e}_r + \cfrac{1}{r}\left(\cfrac{\partial v_r}{\partial \theta} - v_\theta\right)~\mathbf{e}_r\otimes\mathbf{e}_\theta + \cfrac{\partial v_r}{\partial z}~\mathbf{e}_r\otimes\mathbf{e}_z \\ & + \cfrac{\partial v_\theta}{\partial r}~\mathbf{e}_\theta\otimes\mathbf{e}_r + \cfrac{1}{r}\left(\cfrac{\partial v_\theta}{\partial \theta} + v_r \right)~\mathbf{e}_\theta\otimes\mathbf{e}_\theta + \cfrac{\partial v_\theta}{\partial z}~\mathbf{e}_\theta\otimes\mathbf{e}_z \\ & + \cfrac{\partial v_z}{\partial r}~\mathbf{e}_z\otimes\mathbf{e}_r + \cfrac{1}{r}\cfrac{\partial v_z}{\partial \theta}~\mathbf{e}_z\otimes\mathbf{e}_\theta + \cfrac{\partial v_z}{\partial z}~\mathbf{e}_z\otimes\mathbf{e}_z \end{align}

Divergence of a vector field

Using the equation for the divergence of a vector field in curvilinear coordinates, the divergence in cylindrical coordinates can be shown to be

 \begin{align} \boldsymbol{\nabla}\cdot\mathbf{v} & = \cfrac{\partial v_r}{\partial r} + \cfrac{1}{r}\left(\cfrac{\partial v_\theta}{\partial \theta} + v_r \right) + \cfrac{\partial v_z}{\partial z} \end{align}
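This result can also be recovered from the alternative formula \boldsymbol{\nabla}\cdot\mathbf{v} = \tfrac{1}{\sqrt{g}}\,\partial_i(\sqrt{g}~v^i) with \sqrt{g} = r. A sympy sketch (illustrative; the contravariant components are v^1 = v_r, v^2 = v_\theta/r, v^3 = v_z):

```python
# Illustrative check: divergence in cylindrical coordinates from
#   div v = (1/sqrt(g)) d_i ( sqrt(g) v^i ),  sqrt(g) = r,
# using contravariant components v^1 = v_r, v^2 = v_theta/r, v^3 = v_z.
import sympy as sp

r, theta, z = sp.symbols('r theta z', positive=True)
v_r = sp.Function('v_r')(r, theta, z)
v_t = sp.Function('v_theta')(r, theta, z)
v_z = sp.Function('v_z')(r, theta, z)

sqrt_g = r
contra = [v_r, v_t / r, v_z]                       # contravariant v^i
div = sum(sp.diff(sqrt_g * c, x)
          for c, x in zip(contra, [r, theta, z])) / sqrt_g

# Physical-component form quoted in the text, for comparison
div_std = sp.diff(v_r, r) + (sp.diff(v_t, theta) + v_r) / r + sp.diff(v_z, z)
print(sp.simplify(div - div_std))  # 0
```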

Representing a physical second-order tensor field

The physical components of a second-order tensor field are those obtained when the tensor is expressed in terms of a normalized contravariant basis. In cylindrical polar coordinates these components are

 \begin{align} \hat{S}_{11} & = S_{11} =: S_{rr} ~;~~\hat{S}_{12} = \cfrac{S_{12}}{r} =: S_{r\theta} ~;~~ \hat{S}_{13} = S_{13} =: S_{rz} \\ \hat{S}_{21} & = \cfrac{S_{21}}{r} =: S_{\theta r} ~;~~\hat{S}_{22} = \cfrac{S_{22}}{r^2} =: S_{\theta\theta} ~;~~ \hat{S}_{23} = \cfrac{S_{23}}{r} =: S_{\theta z} \\ \hat{S}_{31} & = S_{31} =: S_{zr} ~;~~\hat{S}_{32} = \cfrac{S_{32}}{r} =: S_{z\theta} ~;~~ \hat{S}_{33} = S_{33} =: S_{zz} \end{align}

Gradient of a second-order tensor field

Using the above definitions we can show that the gradient of a second-order tensor field in cylindrical polar coordinates can be expressed as

 \begin{align} \boldsymbol{\nabla} \boldsymbol{S} & = \frac{\partial S_{rr}}{\partial r}~\mathbf{e}_r\otimes\mathbf{e}_r\otimes\mathbf{e}_r + \cfrac{1}{r}\left[\frac{\partial S_{rr}}{\partial \theta} - (S_{\theta r}+S_{r\theta})\right]~\mathbf{e}_r\otimes\mathbf{e}_r\otimes\mathbf{e}_\theta + \frac{\partial S_{rr}}{\partial z}~\mathbf{e}_r\otimes\mathbf{e}_r\otimes\mathbf{e}_z \\ & + \frac{\partial S_{r\theta}}{\partial r}~\mathbf{e}_r\otimes\mathbf{e}_\theta\otimes\mathbf{e}_r + \cfrac{1}{r}\left[\frac{\partial S_{r\theta}}{\partial \theta} + (S_{rr}-S_{\theta\theta})\right]~\mathbf{e}_r\otimes\mathbf{e}_\theta\otimes\mathbf{e}_\theta + \frac{\partial S_{r\theta}}{\partial z}~\mathbf{e}_r\otimes\mathbf{e}_\theta\otimes\mathbf{e}_z \\ & + \frac{\partial S_{rz}}{\partial r}~\mathbf{e}_r\otimes\mathbf{e}_z\otimes\mathbf{e}_r + \cfrac{1}{r}\left[\frac{\partial S_{rz}}{\partial \theta} -S_{\theta z}\right]~\mathbf{e}_r\otimes\mathbf{e}_z\otimes\mathbf{e}_\theta + \frac{\partial S_{rz}}{\partial z}~\mathbf{e}_r\otimes\mathbf{e}_z\otimes\mathbf{e}_z \\ & + \frac{\partial S_{\theta r}}{\partial r}~\mathbf{e}_\theta\otimes\mathbf{e}_r\otimes\mathbf{e}_r + \cfrac{1}{r}\left[\frac{\partial S_{\theta r}}{\partial \theta} + (S_{rr}-S_{\theta\theta})\right]~\mathbf{e}_\theta\otimes\mathbf{e}_r\otimes\mathbf{e}_\theta + \frac{\partial S_{\theta r}}{\partial z}~\mathbf{e}_\theta\otimes\mathbf{e}_r\otimes\mathbf{e}_z \\ & + \frac{\partial S_{\theta\theta}}{\partial r}~\mathbf{e}_\theta\otimes\mathbf{e}_\theta\otimes\mathbf{e}_r + \cfrac{1}{r}\left[\frac{\partial S_{\theta\theta}}{\partial \theta} + (S_{r\theta}+S_{\theta r})\right]~\mathbf{e}_\theta\otimes\mathbf{e}_\theta\otimes\mathbf{e}_\theta + \frac{\partial S_{\theta\theta}}{\partial z}~\mathbf{e}_\theta\otimes\mathbf{e}_\theta\otimes\mathbf{e}_z \\ & + \frac{\partial S_{\theta z}}{\partial r}~\mathbf{e}_\theta\otimes\mathbf{e}_z\otimes\mathbf{e}_r + \cfrac{1}{r}\left[\frac{\partial S_{\theta z}}{\partial \theta} + S_{rz}\right]~\mathbf{e}_\theta\otimes\mathbf{e}_z\otimes\mathbf{e}_\theta + \frac{\partial S_{\theta z}}{\partial z}~\mathbf{e}_\theta\otimes\mathbf{e}_z\otimes\mathbf{e}_z \\ & + \frac{\partial S_{zr}}{\partial r}~\mathbf{e}_z\otimes\mathbf{e}_r\otimes\mathbf{e}_r + \cfrac{1}{r}\left[\frac{\partial S_{zr}}{\partial \theta} - S_{z\theta}\right]~\mathbf{e}_z\otimes\mathbf{e}_r\otimes\mathbf{e}_\theta + \frac{\partial S_{zr}}{\partial z}~\mathbf{e}_z\otimes\mathbf{e}_r\otimes\mathbf{e}_z \\ & + \frac{\partial S_{z\theta}}{\partial r}~\mathbf{e}_z\otimes\mathbf{e}_\theta\otimes\mathbf{e}_r + \cfrac{1}{r}\left[\frac{\partial S_{z\theta}}{\partial \theta} + S_{zr}\right]~\mathbf{e}_z\otimes\mathbf{e}_\theta\otimes\mathbf{e}_\theta + \frac{\partial S_{z\theta}}{\partial z}~\mathbf{e}_z\otimes\mathbf{e}_\theta\otimes\mathbf{e}_z \\ & + \frac{\partial S_{zz}}{\partial r}~\mathbf{e}_z\otimes\mathbf{e}_z\otimes\mathbf{e}_r + \cfrac{1}{r}~\frac{\partial S_{zz}}{\partial \theta}~\mathbf{e}_z\otimes\mathbf{e}_z\otimes\mathbf{e}_\theta + \frac{\partial S_{zz}}{\partial z}~\mathbf{e}_z\otimes\mathbf{e}_z\otimes\mathbf{e}_z \end{align}

Divergence of a second-order tensor field

The divergence of a second-order tensor field in cylindrical polar coordinates can be obtained from the expression for the gradient by collecting terms where the scalar product of the two outer vectors in the dyadic products is nonzero. Therefore,

 \begin{align} \boldsymbol{\nabla}\cdot \boldsymbol{S} & = \frac{\partial S_{rr}}{\partial r}~\mathbf{e}_r + \frac{\partial S_{r\theta}}{\partial r}~\mathbf{e}_\theta + \frac{\partial S_{rz}}{\partial r}~\mathbf{e}_z \\ & + \cfrac{1}{r}\left[\frac{\partial S_{\theta r}}{\partial \theta} + (S_{rr}-S_{\theta\theta})\right]~\mathbf{e}_r + \cfrac{1}{r}\left[\frac{\partial S_{\theta\theta}}{\partial \theta} + (S_{r\theta}+S_{\theta r})\right]~\mathbf{e}_\theta +\cfrac{1}{r}\left[\frac{\partial S_{\theta z}}{\partial \theta} + S_{rz}\right]~\mathbf{e}_z \\ & + \frac{\partial S_{zr}}{\partial z}~\mathbf{e}_r + \frac{\partial S_{z\theta}}{\partial z}~\mathbf{e}_\theta + \frac{\partial S_{zz}}{\partial z}~\mathbf{e}_z \end{align}

Orthogonal curvilinear coordinates

Assume, for the purposes of this section, that the curvilinear coordinate system is orthogonal, i.e.,

 \mathbf{g}_i\cdot\mathbf{g}_j = \begin{cases} g_{ii} & \mbox{if}~ i = j \\ 0 & \mbox{if}~ i \ne j \end{cases} ~;~~ \mathbf{g}^i\cdot\mathbf{g}^j = \begin{cases} g^{ii} & \mbox{if}~ i = j \\ 0 & \mbox{if}~ i \ne j \end{cases}

where \mathbf{g}_i, \mathbf{g}_j are covariant basis vectors and \mathbf{g}^i, \mathbf{g}^j are contravariant basis vectors. Also, let (\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3) be a background, fixed, Cartesian basis.

Metric tensor in orthogonal curvilinear coordinates

Let \mathbf{r}(\mathbf{x}) be the position vector of the point \mathbf{x} with respect to the origin of the coordinate system. The notation can be simplified by noting that \mathbf{x} = \mathbf{r}(\mathbf{x}). At each point we can construct a small line element \rm{d}\mathbf{x}. The square of the length of the line element is the scalar product \rm{d}\mathbf{x} \cdot \rm{d}\mathbf{x} and is called the metric of the space. Recall that the space of interest is assumed to be Euclidean when we talk of curvilinear coordinates. Let us express the position vector in terms of the background, fixed, Cartesian basis, i.e.,

 \mathbf{x} = \sum_{i=1}^3 x_i~\mathbf{e}_i

Using the chain rule, we can then express \rm{d}\mathbf{x} in terms of the three-dimensional orthogonal curvilinear coordinates (\xi^1, \xi^2, \xi^3) as

 \mbox{d}\mathbf{x} = \sum_{i=1}^3 \sum_{j=1}^3 \left(\cfrac{\partial x_i}{\partial\xi^j}~\mathbf{e}_i\right)\mbox{d}\xi^j

Therefore the metric is given by

 \mbox{d}\mathbf{x}\cdot\mbox{d}\mathbf{x} = \sum_{i=1}^3 \sum_{j=1}^3 \sum_{k=1}^3 \cfrac{\partial x_i}{\partial\xi^j}~\cfrac{\partial x_i}{\partial\xi^k}~\mbox{d}\xi^j~\mbox{d}\xi^k

The symmetric quantity

 g_{ij}(\xi^i,\xi^j) = \sum_{k=1}^3 \cfrac{\partial x_k}{\partial\xi^i}~\cfrac{\partial x_k}{\partial\xi^j} = \mathbf{g}_i\cdot\mathbf{g}_j

is called the fundamental (or metric) tensor of the Euclidean space in curvilinear coordinates.

Note also that

 g_{ij} = \cfrac{\partial\mathbf{x}}{\partial\xi^i}\cdot\cfrac{\partial\mathbf{x}}{\partial\xi^j} = \left(\sum_{k} h_{ki}~\mathbf{e}_k\right)\cdot\left(\sum_{m} h_{mj}~\mathbf{e}_m\right) = \sum_{k} h_{ki}~h_{kj}

where h_{ij}\, are the Lamé coefficients.

If we define the scale factors, h_i\,, using

 \mathbf{g}_i\cdot\mathbf{g}_i = g_{ii} = \sum_{k} h_{ki}^2 =: h_i^2 \quad \implies \quad \left|\cfrac{\partial\mathbf{x}}{\partial\xi^i}\right| = \left|\mathbf{g}_i\right| = \sqrt{g_{ii}} = h_i

we get a relation between the fundamental tensor and the Lamé coefficients.

Example: Polar coordinates

If we consider polar coordinates for R2, note that

 (x, y)=(r \cos \theta, r \sin \theta) \,\!

(r, θ) are the curvilinear coordinates, and the Jacobian determinant of the transformation (r,θ) → (r cos θ, r sin θ) is r.

The orthogonal basis vectors are gr = (cos θ, sin θ), gθ = (−r sin θ, r cos θ). The normalized basis vectors are er = (cos θ, sin θ), eθ = (−sin θ, cos θ) and the scale factors are hr = 1 and hθ= r. The fundamental tensor is g11 =1, g22 =r2, g12 = g21 =0.
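A quick numerical illustration (not part of the original text): the fundamental tensor follows from the Jacobian matrix F of the map via [g_{ij}] = [F^T F], and the Jacobian determinant is r, here checked at an arbitrarily chosen point.

```python
# Illustrative check for polar coordinates: the metric as [F^T F], where
# F is the Jacobian of (r, theta) -> (r cos theta, r sin theta),
# evaluated at an arbitrarily chosen point.
import numpy as np

r, th = 1.7, 0.4
F = np.array([[np.cos(th), -r*np.sin(th)],
              [np.sin(th),  r*np.cos(th)]])
G = F.T @ F                                   # metric [g_ij]

print(np.allclose(G, np.diag([1.0, r**2])))   # True: g11 = 1, g22 = r^2
print(np.isclose(np.linalg.det(F), r))        # True: Jacobian determinant = r
```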

Line and surface integrals

If we wish to use curvilinear coordinates for vector calculus calculations, adjustments need to be made in the calculation of line, surface and volume integrals. For simplicity, we again restrict the discussion to three dimensions and orthogonal curvilinear coordinates. However, the same arguments apply for n-dimensional problems though there are some additional terms in the expressions when the coordinate system is not orthogonal.

Line integrals

Normally in the calculation of line integrals we are interested in calculating

 \int_C f \,ds = \int_a^b f(\mathbf{x}(t))\left|{\partial \mathbf{x} \over \partial t}\right|\; dt

where x(t) parametrizes C in Cartesian coordinates. In curvilinear coordinates, the term

 \left|{\partial \mathbf{x} \over \partial t}\right| = \left| \sum_{i=1}^3 {\partial \mathbf{x} \over \partial \xi^i}{\partial \xi^i \over \partial t}\right|

by the chain rule. And from the definition of the Lamé coefficients,

 {\partial \mathbf{x} \over \partial \xi^i} = \sum_{k} h_{ki}~ \mathbf{e}_{k}

and thus

 \left|{\partial \mathbf{x} \over \partial t}\right| = \left| \sum_k\left(\sum_i h_{ki}~\cfrac{\partial \xi^i}{\partial t}\right)\mathbf{e}_k\right| = \sqrt{\sum_i\sum_j\sum_k h_{ki}~h_{kj}\cfrac{\partial \xi^i}{\partial t}\cfrac{\partial \xi^j}{\partial t}} = \sqrt{\sum_i\sum_j g_{ij}~\cfrac{\partial \xi^i}{\partial t}\cfrac{\partial \xi^j}{\partial t}}

Now, since g_{ij} = 0\, when  i \ne j , we have

 \left|{\partial \mathbf{x} \over \partial t}\right| = \sqrt{\sum_i g_{ii}~\left(\cfrac{\partial \xi^i}{\partial t}\right)^2} = \sqrt{\sum_i h_{i}^2~\left(\cfrac{\partial \xi^i}{\partial t}\right)^2}

and we can proceed normally.
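For instance, the circumference of a circle of radius R follows directly from this formula: along the path r(t) = R, θ(t) = t the integrand reduces to the constant h_θ = R. A numerical sketch (illustrative):

```python
# Illustrative line integral in polar coordinates: arc length of the
# circle r = R, theta = t for t in [0, 2*pi], using
#   |dx/dt| = sqrt( h_r^2 (dr/dt)^2 + h_theta^2 (dtheta/dt)^2 )
# with scale factors h_r = 1 and h_theta = r.
import numpy as np

R = 3.0
t = np.linspace(0.0, 2.0 * np.pi, 20001)
dr_dt = np.zeros_like(t)           # r(t) = R is constant
dth_dt = np.ones_like(t)           # theta(t) = t
speed = np.sqrt(1.0 * dr_dt**2 + R**2 * dth_dt**2)

# trapezoidal rule for the integral of |dx/dt| dt
length = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(t))
print(length)  # ~ 2*pi*R = 18.849...
```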

Surface integrals

Likewise, if we are interested in a surface integral, the relevant calculation, with the parameterization of the surface in Cartesian coordinates is:

\int_S f \,dS = \iint_T f(\mathbf{x}(s, t)) \left|{\partial \mathbf{x} \over \partial s}\times {\partial \mathbf{x} \over \partial t}\right| ds dt

Again, in curvilinear coordinates, we have

 \left|{\partial \mathbf{x} \over \partial s}\times {\partial \mathbf{x} \over \partial t}\right| = \left|\left(\sum_i {\partial \mathbf{x} \over \partial \xi^i}{\partial \xi^i \over \partial s}\right) \times \left(\sum_j {\partial \mathbf{x} \over \partial \xi^j}{\partial \xi^j \over \partial t}\right)\right|

and we make use of the definition of curvilinear coordinates again to yield

 \sum_i {\partial \mathbf{x} \over \partial \xi^i}{\partial \xi^i \over \partial s} = \sum_k \left(\sum_{i=1}^3 h_{ki}~{\partial \xi^i \over \partial s}\right) \mathbf{e}_{k} ~;~~ \sum_j {\partial \mathbf{x} \over \partial \xi^j}{\partial \xi^j \over \partial t} = \sum_m \left(\sum_{j=1}^3 h_{mj}~{\partial \xi^j \over \partial t}\right) \mathbf{e}_{m}


 \begin{align} \left|{\partial \mathbf{x} \over \partial s}\times {\partial \mathbf{x} \over \partial t}\right| & = \left| \sum_k \sum_m \left(\sum_{i=1}^3 h_{ki}~{\partial \xi^i \over \partial s}\right)\left(\sum_{j=1}^3 h_{mj}~{\partial \xi^j \over \partial t}\right) \mathbf{e}_{k}\times\mathbf{e}_{m} \right| \\ & = \left|\sum_p \sum_k \sum_m \epsilon_{kmp}\left(\sum_{i=1}^3 h_{ki}~{\partial \xi^i \over \partial s}\right)\left(\sum_{j=1}^3 h_{mj}~{\partial \xi^j \over \partial t}\right) \mathbf{e}_p \right| \end{align}

where \epsilon_{kmp}\, is the permutation symbol.

In determinant form, the cross product in terms of curvilinear coordinates will be:

\begin{vmatrix} \mathbf{e}_{1} & \mathbf{e}_{2} & \mathbf{e}_{3} \\ \sum_i h_{1i} {\partial \xi^i \over \partial s} & \sum_i h_{2i} {\partial \xi^i \over \partial s} & \sum_i h_{3i} {\partial \xi^i \over \partial s} \\ \sum_j h_{1j} {\partial \xi^j \over \partial t} & \sum_j h_{2j} {\partial \xi^j \over \partial t} & \sum_j h_{3j} {\partial \xi^j \over \partial t} \end{vmatrix}

Grad, curl, div, Laplacian

In orthogonal curvilinear coordinates of 3 dimensions, where

 \mathbf{g}^i = \sum_k g^{ik}~\mathbf{g}_k ~;~~ g^{ii} = \cfrac{1}{g_{ii}} = \cfrac{1}{h_i^2}

one can express the gradient of a scalar or vector field as

 \nabla\varphi = \sum_{i} {\partial\varphi \over \partial \xi^i}~ \mathbf{g}^i = \sum_{i} \sum_j {\partial\varphi \over \partial \xi^i}~ g^{ij}~\mathbf{g}_j = \sum_i \cfrac{1}{h_i^2}~{\partial \varphi \over \partial \xi^i}~\mathbf{g}_i ~;~~ \nabla\mathbf{v} = \sum_i \cfrac{1}{h_i^2}~{\partial \mathbf{v} \over \partial \xi^i}\otimes\mathbf{g}_i

For an orthogonal basis

 g = g_{11}~g_{22}~g_{33} = h_1^2~h_2^2~h_3^2 \quad \implies \quad \sqrt{g} = h_1~h_2~h_3

The divergence of a vector field can then be written as

 \boldsymbol{\nabla} \cdot \mathbf{v} = \cfrac{1}{h_1~h_2~h_3}~\sum_i\frac{\partial }{\partial \xi^i}(h_1~h_2~h_3~v^i)


 v^i = g^{ik}~v_k \quad \implies v^1 = g^{11}~v_1 = \cfrac{v_1}{h_1^2} ~;~~ v^2 = g^{22}~v_2 = \cfrac{v_2}{h_2^2}~;~~ v^3 = g^{33}~v_3 = \cfrac{v_3}{h_3^2}


 \boldsymbol{\nabla} \cdot \mathbf{v} = \cfrac{1}{h_1~h_2~h_3}~\sum_i \frac{\partial }{\partial \xi^i}\left(\cfrac{h_1~h_2~h_3}{h_i^2}~v_i\right)

We can get an expression for the Laplacian in a similar manner by noting that

 g^{li}~\frac{\partial \varphi}{\partial \xi^l} = \left\{ g^{11}~\frac{\partial \varphi}{\partial \xi^1}, g^{22}~\frac{\partial \varphi}{\partial \xi^2}, g^{33}~\frac{\partial \varphi}{\partial \xi^3} \right\} = \left\{ \cfrac{1}{h_1^2}~\frac{\partial \varphi}{\partial \xi^1}, \cfrac{1}{h_2^2}~\frac{\partial \varphi}{\partial \xi^2}, \cfrac{1}{h_3^2}~\frac{\partial \varphi}{\partial \xi^3} \right\}

Then we have

 \nabla^2 \varphi = \cfrac{1}{h_1~h_2~h_3}~\sum_i\frac{\partial }{\partial \xi^i}\left(\cfrac{h_1~h_2~h_3}{h_i^2}~\frac{\partial \varphi}{\partial \xi^i}\right)

The expressions for the gradient, divergence, and Laplacian can be directly extended to n-dimensions.
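For example, with the spherical scale factors h_1 = 1, h_2 = r, h_3 = r sin θ the Laplacian formula above reproduces the familiar spherical form. A sympy sketch (illustrative):

```python
# Illustrative check: the orthogonal-coordinate Laplacian
#   (1/(h1 h2 h3)) sum_i d_i ( (h1 h2 h3 / h_i^2) d_i phi )
# with spherical scale factors h = (1, r, r sin(theta)).
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
xi = [r, th, ph]
h = [sp.Integer(1), r, r * sp.sin(th)]
H = h[0] * h[1] * h[2]

f = sp.Function('f')(r, th, ph)
lap = sum(sp.diff(H / h[i]**2 * sp.diff(f, xi[i]), xi[i])
          for i in range(3)) / H

# Familiar spherical Laplacian for comparison
lap_std = (sp.diff(r**2 * sp.diff(f, r), r) / r**2
           + sp.diff(sp.sin(th) * sp.diff(f, th), th) / (r**2 * sp.sin(th))
           + sp.diff(f, ph, 2) / (r**2 * sp.sin(th)**2))

print(sp.simplify(lap - lap_std))  # 0
```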

The curl of a vector field is given by

 \nabla\times\mathbf{v} = \frac{1}{\Omega} \sum_{i=1}^3 \mathbf{e}_i \sum_{j,k} \epsilon_{ijk}~h_i~\frac{\partial (h_k v_k)}{\partial\xi^j} \qquad (\hbox{valid only for } n=3)

where Ω is the product of all hi and εijk is the Levi-Civita symbol.
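As an illustrative check, for the rigid-rotation field v_θ = r in cylindrical coordinates (scale factors (1, r, 1), physical components assumed) the formula gives curl components (0, 0, 2), i.e., twice the angular velocity:

```python
# Illustrative check of the curl formula in cylindrical coordinates
# (h_1, h_2, h_3) = (1, r, 1) for the rigid-rotation field v_theta = r;
# the expected curl is 2 e_z.
import sympy as sp

r, theta, z = sp.symbols('r theta z', positive=True)
xi = [r, theta, z]
h = [sp.Integer(1), r, sp.Integer(1)]
Omega = h[0] * h[1] * h[2]

v = [0, r, 0]    # physical components: v_r = 0, v_theta = r, v_z = 0
curl = [sp.simplify(sum(sp.LeviCivita(i, j, k) * h[i]
                        * sp.diff(h[k] * v[k], xi[j])
                        for j in range(3) for k in range(3)) / Omega)
        for i in range(3)]
print(curl)  # [0, 0, 2]
```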

Fictitious forces in general curvilinear coordinates

An inertial coordinate system is defined as a system of space and time coordinates x1,x2,x3,t in terms of which the equations of motion of a particle free of external forces are simply d2xj/dt2 = 0.[4] In this context, a coordinate system can fail to be “inertial” either due to non-straight time axis or non-straight space axes (or both). In other words, the basis vectors of the coordinates may vary in time at fixed positions, or they may vary with position at fixed times, or both. When equations of motion are expressed in terms of any non-inertial coordinate system (in this sense), extra terms appear, called Christoffel symbols. Strictly speaking, these terms represent components of the absolute acceleration (in classical mechanics), but we may also choose to continue to regard d2xj/dt2 as the acceleration (as if the coordinates were inertial) and treat the extra terms as if they were forces, in which case they are called fictitious forces.[5] The component of any such fictitious force normal to the path of the particle and in the plane of the path’s curvature is then called centrifugal force.[6]

This more general context makes clear the correspondence between the concepts of centrifugal force in rotating coordinate systems and in stationary curvilinear coordinate systems. (Both of these concepts appear frequently in the literature.[7][8][9]) For a simple example, consider a particle of mass m moving in a circle of radius r with angular speed w relative to a system of polar coordinates rotating with angular speed W. The radial equation of motion is mr'' = Fr + mr(w + W)2. Thus the centrifugal force is mr times the square of the absolute rotational speed A = w + W of the particle. If we choose a coordinate system rotating at the speed of the particle, then W = A and w = 0, in which case the centrifugal force is mrA2, whereas if we choose a stationary coordinate system we have W = 0 and w = A, in which case the centrifugal force is again mrA2. The reason for this equality of results is that in both cases the basis vectors at the particle's location are changing in time in exactly the same way. Hence these are really just two different ways of describing exactly the same thing, one description being in terms of rotating coordinates and the other being in terms of stationary curvilinear coordinates, both of which are non-inertial according to the more abstract meaning of that term.

When describing general motion, the actual forces acting on a particle are often referred to the instantaneous osculating circle tangent to the path of motion, and this circle in the general case is not centered at a fixed location, and so the decomposition into centrifugal and Coriolis components is constantly changing. This is true regardless of whether the motion is described in terms of stationary or rotating coordinates.

References


  1. ^ McConnell, A. J. (1957). Applications of Tensor Analysis. Dover Publications, New York. Ch. 9, sec. 1.
  2. ^ Boothby, W. M. (2002). An Introduction to Differentiable Manifolds and Riemannian Geometry (revised ed.). Academic Press, New York.
  3. ^ Ogden, R. W. (2000). Nonlinear Elastic Deformations. Dover.
  4. ^ Friedman, Michael (1989). The Foundations of Space-Time Theories. Princeton University Press.
  5. ^ Stommel, Henry M. and Moore, Dennis W. (1989). An Introduction to the Coriolis Force. Columbia University Press.
  6. ^ Beer, F. and Johnston, E. (1972). Statics and Dynamics (2nd ed.). McGraw-Hill. p. 485.
  7. ^ Hildebrand, Francis B. (1992). Methods of Applied Mathematics. Dover. p. 156.
  8. ^ McQuarrie, Donald A. (2000). Statistical Mechanics. University Science Books.
  9. ^ Weber, Hans-Jurgen and Arfken, George B. (2004). Essential Mathematical Methods for Physicists. Academic Press. p. 843.
  • Spiegel, M. R. (1959). Vector Analysis. Schaum's Outline Series, New York.
  • Arfken, George (1995). Mathematical Methods for Physicists. Academic Press.
