This subsection is optional. It consists of proofs of two results from the prior subsection. These proofs involve the properties of permutations, which will not be used later, except in the optional Jordan Canonical Form subsection.
The prior subsection attacks the problem of showing that for any size there is a determinant function on the set of square matrices of that size by using multilinearity to develop the permutation expansion.
This reduces the problem to showing that there is a determinant function on the set of permutation matrices of that size.
Of course, a permutation matrix can be row-swapped to the identity matrix, and we can calculate its determinant by keeping track of the number of row swaps. However, the problem is still not solved, because we have not shown that the result is well-defined. For instance, the determinant of
could be computed with one swap
or with three.
Both reductions use an odd number of swaps, but how do we know that there isn't some way to do it with an even number of swaps? Corollary 4.6 below proves that there is no permutation matrix that can be row-swapped to an identity matrix in two ways, one with an even number of swaps and the other with an odd number of swaps.
Two rows of a permutation matrix
such that k > j are in an inversion of their natural order.
This permutation matrix
has three inversions: ι_{3} precedes ι_{1}, ι_{3} precedes ι_{2}, and ι_{2} precedes ι_{1}.
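Counting inversions is purely mechanical. A quick Python sketch (the helper name `inversions` is ours, not the book's) counts the pairs that appear out of natural order:

```python
def inversions(phi):
    """Count pairs i < j with phi(i) > phi(j); phi is given as a tuple of images."""
    n = len(phi)
    return sum(1 for i in range(n) for j in range(i + 1, n) if phi[i] > phi[j])

# The example's matrix stacks iota_3, iota_2, iota_1, that is, phi = <3,2,1>.
print(inversions((3, 2, 1)))  # → 3, the three inversions noted above
```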
A rowswap in a permutation matrix changes the number of inversions from even to odd, or from odd to even.
Consider a swap of rows j and k, where k > j. If the two rows are adjacent
then the swap changes the total number of inversions by one — either removing or producing one inversion, depending on whether φ(j) > φ(k) or not, since inversions involving rows not in this pair are not affected. Consequently, the total number of inversions changes from odd to even or from even to odd.
If the rows are not adjacent then they can be swapped via a sequence of adjacent swaps, first bringing row k up
and then bringing row j down.
Each of these adjacent swaps changes the number of inversions from odd to even or from even to odd. There are an odd number (k − j) + (k − j − 1) of them. The total change in the number of inversions is from even to odd or from odd to even.
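Lemma 4.3 can be spot-checked by brute force. This sketch (helper names ours) tries every swap of two positions in every 4-permutation and confirms that the parity of the inversion count always flips:

```python
from itertools import permutations

def inversions(phi):
    """Count pairs a < b with phi(a) > phi(b)."""
    n = len(phi)
    return sum(1 for a in range(n) for b in range(a + 1, n) if phi[a] > phi[b])

# Any swap of rows j and k, adjacent or not, flips the parity.
for phi in permutations(range(1, 5)):
    for j in range(3):
        for k in range(j + 1, 4):
            swapped = list(phi)
            swapped[j], swapped[k] = swapped[k], swapped[j]
            assert inversions(phi) % 2 != inversions(tuple(swapped)) % 2
print("every swap changes the parity")
```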
The signum of a permutation φ, written sgn(φ), is + 1 if the number of inversions in P_{φ} is even, and is − 1 if the number of inversions is odd.
With the subscripts from Example 3.8 for the 3-permutations, sgn(φ_{1}) = 1 while sgn(φ_{2}) = − 1.
If a permutation matrix has an odd number of inversions then swapping it to the identity takes an odd number of swaps. If it has an even number of inversions then swapping to the identity takes an even number of swaps.
The identity matrix has zero inversions. To change an odd number to zero requires an odd number of swaps, and to change an even number to zero requires an even number of swaps.
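As a sanity check on the corollary, this sketch (helper names ours) reduces each permutation to the identity with swaps that are not necessarily adjacent, and confirms that the swap count always has the same parity as the inversion count:

```python
from itertools import permutations

def inversions(phi):
    """Count pairs i < j with phi(i) > phi(j)."""
    n = len(phi)
    return sum(1 for i in range(n) for j in range(i + 1, n) if phi[i] > phi[j])

def swaps_to_identity(phi):
    """Swap each entry directly into place; return the number of swaps used."""
    rows, count = list(phi), 0
    for i in range(len(rows)):
        if rows[i] != i + 1:
            j = rows.index(i + 1)
            rows[i], rows[j] = rows[j], rows[i]
            count += 1
    return count

for phi in permutations(range(1, 6)):
    assert swaps_to_identity(phi) % 2 == inversions(phi) % 2
print("swap-count parity always matches inversion parity")
```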
We still have not shown that the permutation expansion is well-defined because we have not considered row operations on permutation matrices other than row swaps. We will finesse this problem: we will define a function d by altering the permutation expansion formula, replacing det(P_{φ}) with sgn(φ)
(this gives the same value as the permutation expansion because the prior result shows that det(P_{φ}) = sgn(φ)). This formula's advantage is that the number of inversions is clearly welldefined — just count them. Therefore, we will show that a determinant function exists for all sizes by showing that d is it, that is, that d satisfies the four conditions.
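In code the altered formula is short. This Python sketch (zero-indexed; the helper name `sgn` is ours, while `d` is the function named in the text) computes d(T) as the sum over all permutations φ of sgn(φ) t_{1,φ(1)} ⋯ t_{n,φ(n)}:

```python
import math
from itertools import permutations

def sgn(phi):
    """Signum: +1 for an even number of inversions, -1 for odd."""
    n = len(phi)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if phi[i] > phi[j])
    return 1 if inv % 2 == 0 else -1

def d(T):
    """Permutation expansion with det(P_phi) replaced by sgn(phi)."""
    n = len(T)
    return sum(sgn(phi) * math.prod(T[i][phi[i]] for i in range(n))
               for phi in permutations(range(n)))

print(d([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))  # → -3
```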
The function d is a determinant. Hence determinants exist for every n.
We must check that it has the four properties from the definition.
Property (4) is easy; in
all of the summands are zero except for the product down the diagonal, which is one.
For property (3) consider d(T̂) where T̂ is the matrix T with row i multiplied by the scalar k.
Factor the k out of each term to get the desired equality.
For (2), let T̂ be the matrix T with rows i and j swapped.
To convert to unhatted t's, for each φ consider the permutation σ that equals φ except that the i-th and j-th numbers are interchanged, σ(i) = φ(j) and σ(j) = φ(i). Replacing the φ in each summand with this σ expresses the summand in terms of unhatted t's. Now sgn(φ) = − sgn(σ) (by Lemma 4.3) and so we get
where the sum is over all permutations σ derived from another permutation φ by a swap of the ith and jth numbers. But any permutation can be derived from some other permutation by such a swap, in one and only one way, so this summation is in fact a sum over all permutations, taken once and only once. Thus .
To do property (1) let and consider
(notice: that's kt_{i,φ(j)}, not kt_{j,φ(j)}). Distribute, commute, and factor.
We finish by showing that the terms add to zero. This sum represents d(S) where S is a matrix equal to T except that row j of S is a copy of row i of T (because the factor is t_{i,φ(j)}, not t_{j,φ(j)}). Thus, S has two equal rows, rows i and j. Since we have already shown that d changes sign on row swaps, as in Lemma 2.3 we conclude that d(S) = 0.
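The conclusion d(S) = 0 can be checked numerically with a small permutation-expansion sketch of d (helper names ours; zero-indexed): a matrix with two equal rows gets the value zero.

```python
import math
from itertools import permutations

def sgn(phi):
    """Signum from the inversion count."""
    n = len(phi)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if phi[i] > phi[j])
    return 1 if inv % 2 == 0 else -1

def d(T):
    """Permutation expansion with sgn in place of det(P_phi)."""
    n = len(T)
    return sum(sgn(phi) * math.prod(T[i][phi[i]] for i in range(n))
               for phi in permutations(range(n)))

# S has row 1 equal to row 0, so d(S) must vanish.
S = [[1, 2, 3], [1, 2, 3], [7, 8, 10]]
print(d(S))  # → 0
```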
We have now shown that determinant functions exist for each size. We already know that for each size there is at most one determinant. Therefore, the permutation expansion computes the one and only determinant value of a square matrix.
We end this subsection by proving the other result remaining from the prior subsection, that the determinant of a matrix equals the determinant of its transpose.
Writing out the permutation expansion of the general matrix and of its transpose, and comparing corresponding terms
(terms with the same letters)
shows that the corresponding permutation matrices are transposes. That is, there is a relationship between these corresponding permutations. Problem 6 shows that they are inverses.
The determinant of a matrix equals the determinant of its transpose.
Call the matrix T and denote the entries of T^{trans} with s's so that t_{i,j} = s_{j,i}. Substitution gives this
and we can finish the argument by manipulating the expression on the right to be recognizable as the determinant of the transpose. We have written all permutation expansions (as in the middle expression above) with the row indices ascending. To rewrite the expression on the right in this way, note that because φ is a permutation, the row indices in the term on the right φ(1), ..., φ(n) are just the numbers 1, ..., n, rearranged. We can thus commute to have these ascend, giving (if the column index is j and the row index is φ(j) then, where the row index is i, the column index is φ ^{− 1}(i)). Substituting on the right gives
(Problem 5 shows that sgn(φ ^{− 1}) = sgn(φ)). Since every permutation is the inverse of another, a sum over all φ ^{− 1} is a sum over all permutations φ
as required.
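Both facts used in the argument can be spot-checked with a small permutation-expansion sketch (helper names ours; zero-indexed): that sgn(φ^{−1}) = sgn(φ), and that d gives the same value on a matrix and its transpose.

```python
import math
from itertools import permutations

def sgn(phi):
    """Signum from the inversion count."""
    n = len(phi)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if phi[i] > phi[j])
    return 1 if inv % 2 == 0 else -1

def d(T):
    """Permutation expansion with sgn in place of det(P_phi)."""
    n = len(T)
    return sum(sgn(phi) * math.prod(T[i][phi[i]] for i in range(n))
               for phi in permutations(range(n)))

# sgn(phi^{-1}) = sgn(phi), the fact from Problem 5
for phi in permutations(range(4)):
    inverse = tuple(sorted(range(4), key=lambda i: phi[i]))
    assert sgn(inverse) == sgn(phi)

# d agrees on a sample matrix and its transpose
T = [[2, 1, 5], [0, 3, -1], [4, 2, 2]]
Tt = [list(row) for row in zip(*T)]
assert d(T) == d(Tt)
print(d(T))  # → -48
```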
These summarize the notation used in this book for the 2- and 3-permutations.
Give the permutation expansion of a general matrix and its transpose.
This is the permutation expansion of the determinant of a matrix
and the permutation expansion of the determinant of its transpose.
As with the expansions described in the subsection, the permutation matrices from corresponding terms are transposes (although this is disguised by the fact that each is self-transpose).
This problem appears also in the prior subsection.
Each of these is easy to check.
permutation  φ_{1}  φ_{2} 
inverse  φ_{1}  φ_{2} 
permutation  φ_{1}  φ_{2}  φ_{3}  φ_{4}  φ_{5}  φ_{6} 
inverse  φ_{1}  φ_{2}  φ_{3}  φ_{5}  φ_{4}  φ_{6} 
What is the signum of the n-permutation φ_{n!}? (Strang 1980)
The pattern is this.
So to find the signum of φ_{n!}, we subtract one to get n − 1 and look at the remainder on division by four. If the remainder is 1 or 2 then the signum is − 1, otherwise it is + 1. (The permutation φ_{n!} = ⟨n, n − 1, …, 1⟩ has (n − 1) + (n − 2) + ⋯ + 1 = n(n − 1)/2 inversions, and this count is odd exactly when n − 1 leaves a remainder of 1 or 2 on division by four.) Thus the n = 1 case has a signum of + 1, the n = 2 case has a signum of − 1, and the n = 3 case has a signum of − 1.
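The pattern can be checked by brute force. This sketch (helper names ours) assumes the book's lexicographic listing, in which φ_{n!} is the order-reversing permutation ⟨n, n − 1, …, 1⟩:

```python
def sgn(phi):
    """Signum from the inversion count."""
    n = len(phi)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if phi[i] > phi[j])
    return 1 if inv % 2 == 0 else -1

for n in range(1, 9):
    reversal = tuple(range(n, 0, -1))          # phi_{n!} = <n, n-1, ..., 1>
    predicted = -1 if (n - 1) % 4 in (1, 2) else 1
    assert sgn(reversal) == predicted
print("pattern holds for n = 1..8")
```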
Prove these.
Prove that the matrix of the inverse permutation, P_{φ ^{− 1}}, is the transpose of the matrix of the permutation, P_{φ}, for any permutation φ.
If φ(i) = j then φ ^{− 1}(j) = i. The result now follows from the observation that P_{φ} has a 1 in entry i,j if and only if φ(i) = j, while P_{φ ^{− 1}} has a 1 in entry j,i if and only if φ ^{− 1}(j) = i; these two conditions are equivalent.
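A brute-force confirmation of this problem is easy to write. In this sketch (helper names ours) `perm_matrix` builds P_{φ} from a tuple of 1-indexed images and `inverse` computes φ^{−1}:

```python
from itertools import permutations

def perm_matrix(phi):
    """P_phi has a 1 in entry i,j exactly when phi(i) = j (1-indexed images)."""
    n = len(phi)
    return [[1 if phi[i] == j + 1 else 0 for j in range(n)] for i in range(n)]

def inverse(phi):
    """The inverse permutation: if phi(i) = j then inverse(j) = i."""
    inv = [0] * len(phi)
    for i, j in enumerate(phi):
        inv[j - 1] = i + 1
    return tuple(inv)

for phi in permutations(range(1, 5)):
    transpose = [list(row) for row in zip(*perm_matrix(phi))]
    assert perm_matrix(inverse(phi)) == transpose
print("P of the inverse equals the transpose of P, for every 4-permutation")
```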
Show that a permutation matrix with m inversions can be row-swapped to the identity in m steps. Contrast this with Corollary 4.6.
This does not say that m is the least number of swaps to produce an identity, nor does it say that m is the most. It instead says that there is a way to swap to the identity in exactly m steps.
Let ι_{j} be the first row that is inverted with respect to a prior row and let ι_{k} be the first row giving that inversion. We have this interval of rows.
Swap.
The second matrix has one fewer inversion because there is one fewer inversion in the interval (s vs. s + 1) and inversions involving rows outside the interval are not affected.
Proceed in this way, reducing the number of inversions by one with each row swap. When no inversions remain, the result is the identity.
The contrast with Corollary 4.6 is that the statement of this exercise is a "there exists" statement: there exists a way to swap to the identity in exactly m steps. But the corollary is a "for all" statement: for all ways to swap to the identity, the parity (evenness or oddness) is the same.
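The exercise's procedure translates directly to code. This sketch (helper names ours) repeatedly swaps an adjacent out-of-order pair, each swap removing exactly one inversion, and confirms that exactly m swaps are used:

```python
from itertools import permutations

def inversions(phi):
    """Count pairs i < j with phi(i) > phi(j)."""
    n = len(phi)
    return sum(1 for i in range(n) for j in range(i + 1, n) if phi[i] > phi[j])

def adjacent_swaps_to_identity(phi):
    """Swap adjacent out-of-order pairs until sorted; return the swap count."""
    rows, count = list(phi), 0
    done = False
    while not done:
        done = True
        for s in range(len(rows) - 1):
            if rows[s] > rows[s + 1]:
                rows[s], rows[s + 1] = rows[s + 1], rows[s]
                count += 1
                done = False
    return count

for phi in permutations(range(1, 6)):
    assert adjacent_swaps_to_identity(phi) == inversions(phi)
print("exactly m swaps in every case")
```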
For any permutation φ let g(φ) be the integer defined in this way.
g(φ) = ∏_{i < j} [φ(j) − φ(i)]
(This is the product, over all indices i and j with i < j, of terms of the given form.)
Many authors give this formula as the definition of the signum function.
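The connection between g and the signum is that each inversion of φ contributes exactly one negative factor to the product, so the sign of g(φ) is sgn(φ). This sketch (helper names ours) verifies that on all 5-permutations:

```python
import math
from itertools import permutations

def g(phi):
    """Product over all i < j of phi(j) - phi(i)."""
    n = len(phi)
    return math.prod(phi[j] - phi[i] for i in range(n) for j in range(i + 1, n))

def sgn(phi):
    """Signum from the inversion count."""
    n = len(phi)
    inv = sum(1 for i in range(n) for j in range(i + 1, n) if phi[i] > phi[j])
    return 1 if inv % 2 == 0 else -1

for phi in permutations(range(1, 6)):
    assert (1 if g(phi) > 0 else -1) == sgn(phi)
print("the sign of g agrees with sgn")
```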
