add to the (n+1)th row; by $b_{21}$, and add to the (n+2)th row; by $b_{31}$, and add to the (n+3)rd row, &c. C then becomes

$$\left(\begin{array}{cccc}
\sum_\lambda a_{1\lambda}b_{1\lambda} & \sum_\lambda a_{2\lambda}b_{1\lambda} & \cdots & \sum_\lambda a_{n\lambda}b_{1\lambda}\\
\sum_\lambda a_{1\lambda}b_{2\lambda} & \sum_\lambda a_{2\lambda}b_{2\lambda} & \cdots & \sum_\lambda a_{n\lambda}b_{2\lambda}\\
\vdots & \vdots & & \vdots\\
\sum_\lambda a_{1\lambda}b_{n\lambda} & \sum_\lambda a_{2\lambda}b_{n\lambda} & \cdots & \sum_\lambda a_{n\lambda}b_{n\lambda}
\end{array}\right)$$

and all the elements of D become zero. Now by the expansion theorem the determinant becomes

$$\left|\,\sum_\lambda a_{k\lambda}b_{i\lambda}\,\right|.$$

We thus obtain for the product a determinant of order $n$. We may say that, in the resulting determinant, the element in the i th row and kth column is obtained by multiplying the elements in the kth row of the first determinant severally by the elements in the i th row of the second, and has the expression

$$a_{k1}b_{i1}+a_{k2}b_{i2}+a_{k3}b_{i3}+\cdots+a_{kn}b_{in}=\sum_\lambda a_{k\lambda}b_{i\lambda},$$
and we obtain other expressions by transforming either or both determinants so as to read by columns as they formerly did by rows.
Remark.—In particular the square of a determinant is a determinant of the same order $(b_{ik})$ such that $b_{ik}=b_{ki}$; it is for this reason termed symmetrical.
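The row-by-row rule is easily verified numerically; the following sketch assumes NumPy is available, and the names A, B, C are illustrative rather than the article's:

```python
import numpy as np

# The (i, k) element of the product determinant is
# a_k1*b_i1 + a_k2*b_i2 + ... + a_kn*b_in  (rows of A against rows of B).
rng = np.random.default_rng(0)
n = 4
A = rng.integers(-3, 4, (n, n)).astype(float)
B = rng.integers(-3, 4, (n, n)).astype(float)

C = np.array([[sum(A[k, l] * B[i, l] for l in range(n))
               for k in range(n)]
              for i in range(n)])          # equivalently C = B @ A.T

# |C| = |A| * |B|
assert np.isclose(np.linalg.det(C), np.linalg.det(A) * np.linalg.det(B))

# Squaring: rows of A against rows of A gives b_ik = b_ki (symmetrical).
S = A @ A.T
assert np.allclose(S, S.T)
```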
The Adjoint or Reciprocal Determinant arises from $\Delta=|a_{ik}|$ by substituting for each element $a_{ik}$ the corresponding minor $A_{ik}$ so as to form $\Delta'=|A_{ik}|$. If we form the product $\Delta\cdot\Delta'$ by the theorem for the multiplication of determinants we find that the element in the i th row and kth column of the product is

$$a_{k1}A_{i1}+a_{k2}A_{i2}+\cdots+a_{kn}A_{in},$$

the value of which is zero when $k$ is different from $i$, whilst it has the value $\Delta$ when $k=i$. Hence the product determinant has the principal diagonal elements each equal to $\Delta$ and the remaining elements zero. Its value is therefore $\Delta^n$ and we have the identity

$$\Delta\cdot\Delta'=\Delta^n \quad\text{or}\quad \Delta'=\Delta^{n-1}.$$
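A quick numerical check of this identity, assuming NumPy; for a nonsingular example the matrix of minors $A_{ik}$ can be recovered from the inverse:

```python
import numpy as np

# cof(A) = det(A) * inv(A).T gives the matrix of signed minors A_ik.
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
d = np.linalg.det(A)
cof = d * np.linalg.inv(A).T

# The product has Δ down the principal diagonal and zeros elsewhere ...
assert np.allclose(A @ cof.T, d * np.eye(n))
# ... hence Δ' = Δ**(n-1).
assert np.isclose(np.linalg.det(cof), d ** (n - 1))
```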
It can now be proved that the first minor of the adjoint determinant, say $B_{rs}$, is equal to $\Delta^{n-2}a_{rs}$.
From the equations

$$\begin{aligned}
\xi_1&=a_{11}x_1+a_{12}x_2+a_{13}x_3+\cdots,\\
\xi_2&=a_{21}x_1+a_{22}x_2+a_{23}x_3+\cdots,\\
\xi_3&=a_{31}x_1+a_{32}x_2+a_{33}x_3+\cdots,\\
&\;\;\vdots
\end{aligned}$$

we derive

$$\begin{aligned}
\Delta x_1&=A_{11}\xi_1+A_{21}\xi_2+A_{31}\xi_3+\cdots,\\
\Delta x_2&=A_{12}\xi_1+A_{22}\xi_2+A_{32}\xi_3+\cdots,\\
\Delta x_3&=A_{13}\xi_1+A_{23}\xi_2+A_{33}\xi_3+\cdots,\\
&\;\;\vdots
\end{aligned}$$

and thence, solving this second system by the same rule,

$$\begin{aligned}
\Delta'\xi_1&=\Delta\left(B_{11}x_1+B_{12}x_2+B_{13}x_3+\cdots\right),\\
\Delta'\xi_2&=\Delta\left(B_{21}x_1+B_{22}x_2+B_{23}x_3+\cdots\right),\\
&\;\;\vdots
\end{aligned}$$

and comparison of the first and third systems yields

$$\Delta^{n-1}a_{rs}=\Delta B_{rs},\quad\text{that is}\quad B_{rs}=\Delta^{n-2}a_{rs}.$$
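Numerically (NumPy assumed; B below is the matrix of first minors of the adjoint):

```python
import numpy as np

# B_rs = Δ**(n-2) * a_rs: take the minors of the adjoint determinant.
rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))
d = np.linalg.det(A)
adjA = d * np.linalg.inv(A).T                     # elements A_ik
B = np.linalg.det(adjA) * np.linalg.inv(adjA).T   # its minors B_rs
assert np.allclose(B, d ** (n - 2) * A)
```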
In general it can be proved that any minor of order $p$ of the adjoint is equal to the complementary of the corresponding minor of the original multiplied by the (p – 1)th power of the original determinant.
Theorem.—The adjoint determinant is the (n – 1)th power of the original determinant. The adjoint determinant will be seen subsequently to present itself in the theory of linear equations and in the theory of linear transformation.
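For leading minors the general statement can be checked as follows (NumPy assumed; the sign conventions for non-leading minors need more care):

```python
import numpy as np

# A leading p-rowed minor of the adjoint equals Δ**(p-1) times the
# complementary minor of the original determinant.
rng = np.random.default_rng(3)
n, p = 5, 3
A = rng.standard_normal((n, n))
d = np.linalg.det(A)
adjA = d * np.linalg.inv(A).T

minor_adj = np.linalg.det(adjA[:p, :p])
complementary = np.linalg.det(A[p:, p:])   # rows and columns p+1 .. n
assert np.isclose(minor_adj, d ** (p - 1) * complementary)
```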
Determinants of Special Forms.—It was observed above that the square of a determinant when expressed as a determinant of the same order is such that its elements have the property expressed by $a_{ik}=a_{ki}$. Such determinants are called symmetrical. It is easy to see that the adjoint determinant is also symmetrical, viz. such that $A_{ik}=A_{ki}$, for the determinant got by suppressing the i th row and kth column differs only by an interchange of rows and columns from that got by suppressing the kth row and i th column. If any symmetrical determinant vanish and be bordered as shown below

$$\begin{vmatrix}
a_{11} & a_{12} & a_{13} & \lambda\\
a_{12} & a_{22} & a_{23} & \mu\\
a_{13} & a_{23} & a_{33} & \nu\\
\lambda & \mu & \nu & 0
\end{vmatrix}$$

it is a perfect square when considered as a function of $\lambda,\mu,\nu$. For since $A_{11}A_{22}-A_{12}^2=\Delta a_{33}$, with similar relations, we have, when $\Delta$ vanishes, a number of relations similar to $A_{11}A_{22}=A_{12}^2$, and either $A_{rs}=+\sqrt{A_{rr}A_{ss}}$ or $A_{rs}=-\sqrt{A_{rr}A_{ss}}$ for all different values of $r$ and $s$. Now the determinant has the value

$$-\left(A_{11}\lambda^2+A_{22}\mu^2+A_{33}\nu^2+2A_{23}\mu\nu+2A_{31}\nu\lambda+2A_{12}\lambda\mu\right)$$

in general, and hence by substitution

$$-\left(\lambda\sqrt{A_{11}}\pm\mu\sqrt{A_{22}}\pm\nu\sqrt{A_{33}}\right)^2.$$
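A concrete instance (NumPy assumed): the perfect-square statement amounts to the adjoint of the vanishing symmetrical determinant having rank 1, and the matrix M below is our example, not the article's.

```python
import numpy as np

def cofactors(M):
    """Matrix of signed first minors, valid even for singular M."""
    n = M.shape[0]
    C = np.empty_like(M)
    for i in range(n):
        for k in range(n):
            minor = np.delete(np.delete(M, i, 0), k, 1)
            C[i, k] = (-1) ** (i + k) * np.linalg.det(minor)
    return C

M = np.array([[2., 1., 3.],
              [1., 2., 3.],
              [3., 3., 6.]])              # symmetric, third row = first + second
assert np.isclose(np.linalg.det(M), 0)

C = cofactors(M)
assert np.allclose(C, C.T)                # the adjoint is symmetrical
assert np.linalg.matrix_rank(C) == 1      # the quadratic form is a square

lam, mu, nu = 1.0, -2.0, 0.5              # arbitrary border
b = np.array([lam, mu, nu])
bordered = np.block([[M, b[:, None]], [b[None, :], np.zeros((1, 1))]])
assert np.isclose(np.linalg.det(bordered), -(b @ C @ b))
```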
A skew symmetric determinant has $a_{rr}=0$ and $a_{rs}=-a_{sr}$ for all values of $r$ and $s$. Such a determinant when of uneven degree vanishes, for if we multiply each row by $-1$ we multiply the determinant by $(-1)^n=-1$, and the effect of this is otherwise merely to transpose the determinant so that it reads by rows as it formerly did by columns, an operation which we know leaves the determinant unaltered. Hence $\Delta=-\Delta$ or $\Delta=0$. When a skew symmetric determinant is of even degree it is a perfect square. This theorem is due to Cayley, and reference may be made to Salmon's Higher Algebra, 4th ed. Art. 39. In the case of the determinant of order 4 the square root is

$$a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23}.$$
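Both statements are easy to check numerically (NumPy assumed):

```python
import numpy as np

# A skew symmetric determinant of odd order vanishes; for order 4 the
# determinant is the square of a12*a34 - a13*a24 + a14*a23.
rng = np.random.default_rng(4)

S3 = rng.standard_normal((3, 3)); S3 = S3 - S3.T
assert np.isclose(np.linalg.det(S3), 0)

S4 = rng.standard_normal((4, 4)); S4 = S4 - S4.T
root = S4[0, 1]*S4[2, 3] - S4[0, 2]*S4[1, 3] + S4[0, 3]*S4[1, 2]
assert np.isclose(np.linalg.det(S4), root ** 2)
```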
A skew determinant is one which is skew symmetric in all respects, except that the elements of the leading diagonal are not all zero. Such a determinant is of importance in the theory of orthogonal substitution. In the theory of surfaces we transform from one set of three rectangular axes to another by the substitutions

$$\begin{aligned}
X&=ax+by+cz,\\
Y&=a'x+b'y+c'z,\\
Z&=a''x+b''y+c''z,
\end{aligned}$$

where $X^2+Y^2+Z^2=x^2+y^2+z^2$. This relation implies six equations between the coefficients, so that only three of them are independent. Further we find

$$\begin{aligned}
x&=aX+a'Y+a''Z,\\
y&=bX+b'Y+b''Z,\\
z&=cX+c'Y+c''Z,
\end{aligned}$$

and the problem is to express the nine coefficients in terms of three independent quantities.
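For illustration, a rotation about the z-axis (our example, NumPy assumed) satisfies all of these relations:

```python
import numpy as np

# X² + Y² + Z² = x² + y² + z² forces the six relations (orthonormal rows),
# and the inverse substitution uses the transposed coefficients.
t = 0.7
R = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0,          0,         1]])
assert np.allclose(R @ R.T, np.eye(3))   # six relations among nine coefficients
x = np.array([1.0, 2.0, 3.0])
X = R @ x
assert np.isclose(X @ X, x @ x)          # lengths preserved
assert np.allclose(R.T @ X, x)           # x = aX + a'Y + a''Z, &c.
```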
In general in space of $n$ dimensions we have $n$ substitutions similar to

$$X_1=a_{11}x_1+a_{12}x_2+\cdots+a_{1n}x_n,$$

and we have to express the $n^2$ coefficients in terms of $\tfrac{1}{2}n(n-1)$ independent quantities; which must be possible, because

$$X_1^2+X_2^2+\cdots+X_n^2=x_1^2+x_2^2+\cdots+x_n^2.$$

Let there be $2n$ equations

$$\begin{aligned}
x_1&=b_{11}\xi_1+b_{21}\xi_2+b_{31}\xi_3+\cdots, & X_1&=b_{11}\xi_1+b_{12}\xi_2+b_{13}\xi_3+\cdots,\\
x_2&=b_{12}\xi_1+b_{22}\xi_2+b_{32}\xi_3+\cdots, & X_2&=b_{21}\xi_1+b_{22}\xi_2+b_{23}\xi_3+\cdots,\\
&\;\;\vdots & &\;\;\vdots
\end{aligned}$$

where $b_{rr}=1$ and $b_{rs}=-b_{sr}$ for all values of $r$ and $s$. There are then $\tfrac{1}{2}n(n-1)$ quantities $b_{rs}$. Let the determinant of the b's be $\Delta_b$ and $B_{rs}$, the minor corresponding to $b_{rs}$. We can eliminate the quantities $\xi_1,\xi_2,\ldots,\xi_n$ and obtain $n$ relations

$$\begin{aligned}
\Delta_b x_1&=(2B_{11}-\Delta_b)X_1+2B_{21}X_2+2B_{31}X_3+\cdots,\\
\Delta_b x_2&=2B_{12}X_1+(2B_{22}-\Delta_b)X_2+2B_{32}X_3+\cdots,\\
&\;\;\vdots
\end{aligned}$$

and from these another equivalent set

$$\begin{aligned}
\Delta_b X_1&=(2B_{11}-\Delta_b)x_1+2B_{12}x_2+2B_{13}x_3+\cdots,\\
\Delta_b X_2&=2B_{21}x_1+(2B_{22}-\Delta_b)x_2+2B_{23}x_3+\cdots,\\
&\;\;\vdots
\end{aligned}$$

and now writing

$$a_{ii}=\frac{2B_{ii}-\Delta_b}{\Delta_b},\qquad a_{ik}=\frac{2B_{ik}}{\Delta_b},$$

we have a transformation which is orthogonal, because $\sum X^2=\sum x^2$ and the elements $a_{ii}$, $a_{ik}$ are functions of the $\tfrac{1}{2}n(n-1)$ independent quantities $b$. We may therefore form an orthogonal transformation in association with every skew determinant which has its leading diagonal elements unity, for the $\tfrac{1}{2}n(n-1)$ quantities $b$ are clearly arbitrary.
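A sketch of the construction in matrix form (NumPy assumed): the coefficients $a_{ii}=(2B_{ii}-\Delta_b)/\Delta_b$, $a_{ik}=2B_{ik}/\Delta_b$ are assembled directly from the cofactors of an arbitrary skew determinant with unit diagonal.

```python
import numpy as np

# Build b_rr = 1, b_rs = -b_sr from arbitrary quantities, then form the
# orthogonal substitution a_ik = (2*B_ik - δ_ik*Δ_b) / Δ_b.
rng = np.random.default_rng(5)
n = 4
S = np.triu(rng.standard_normal((n, n)), 1)   # the ½n(n-1) arbitrary b's
B = np.eye(n) + S - S.T                       # unit diagonal, skew elsewhere

d = np.linalg.det(B)
cof = d * np.linalg.inv(B).T                  # cofactors B_ik
A = (2 * cof - d * np.eye(n)) / d             # the substitution coefficients

assert np.allclose(A.T @ A, np.eye(n))        # orthogonal: Σ X² = Σ x²
```

In matrix language the same coefficients are $A = 2(B^{\mathsf T})^{-1} - I$, the familiar Cayley parametrization of orthogonal transformations.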
For the second order we may take

$$\Delta_b=\begin{vmatrix} 1 & \lambda\\ -\lambda & 1\end{vmatrix}=1+\lambda^2,$$

and the adjoint determinant is the same; hence

$$\begin{aligned}
(1+\lambda^2)x_1&=(1-\lambda^2)X_1-2\lambda X_2,\\
(1+\lambda^2)x_2&=2\lambda X_1+(1-\lambda^2)X_2.
\end{aligned}$$
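These coefficients may be recognized as the tangent half-angle parametrization of a plane rotation; a quick check (NumPy assumed, with $\lambda=\tan\frac{\theta}{2}$):

```python
import numpy as np

# The second-order substitution is a rotation through θ = 2*arctan(λ).
lam = 0.6
A = np.array([[1 - lam**2, -2 * lam],
              [2 * lam,     1 - lam**2]]) / (1 + lam**2)
assert np.allclose(A.T @ A, np.eye(2))      # orthogonal
theta = 2 * np.arctan(lam)
assert np.allclose(A, [[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
```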
Similarly, for the order 3, we take

$$\Delta_b=\begin{vmatrix} 1 & \nu & -\mu\\ -\nu & 1 & \lambda\\ \mu & -\lambda & 1\end{vmatrix}=1+\lambda^2+\mu^2+\nu^2,$$

and the adjoint is

$$\begin{vmatrix}
1+\lambda^2 & \lambda\mu+\nu & \lambda\nu-\mu\\
\lambda\mu-\nu & 1+\mu^2 & \mu\nu+\lambda\\
\lambda\nu+\mu & \mu\nu-\lambda & 1+\nu^2
\end{vmatrix},$$

leading to the orthogonal substitution

$$\begin{aligned}
\Delta_b x_1&=(1+\lambda^2-\mu^2-\nu^2)X_1+2(\lambda\mu-\nu)X_2+2(\lambda\nu+\mu)X_3,\\
\Delta_b x_2&=2(\lambda\mu+\nu)X_1+(1-\lambda^2+\mu^2-\nu^2)X_2+2(\mu\nu-\lambda)X_3,\\
\Delta_b x_3&=2(\lambda\nu-\mu)X_1+2(\mu\nu+\lambda)X_2+(1-\lambda^2-\mu^2+\nu^2)X_3.
\end{aligned}$$
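The third-order substitution, checked numerically for arbitrary $\lambda,\mu,\nu$ (NumPy assumed):

```python
import numpy as np

# The coefficient array of the third-order substitution, divided by Δ_b,
# is orthogonal with determinant +1, i.e. a rotation.
lam, mu, nu = 0.3, -1.1, 0.7
d = 1 + lam**2 + mu**2 + nu**2
A = np.array([
    [1 + lam**2 - mu**2 - nu**2, 2*(lam*mu - nu),            2*(lam*nu + mu)],
    [2*(lam*mu + nu),            1 - lam**2 + mu**2 - nu**2, 2*(mu*nu - lam)],
    [2*(lam*nu - mu),            2*(mu*nu + lam),            1 - lam**2 - mu**2 + nu**2],
]) / d
assert np.allclose(A.T @ A, np.eye(3))
assert np.isclose(np.linalg.det(A), 1.0)
```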
Functional determinants were first investigated by Jacobi in a work De Determinantibus Functionalibus. Suppose $n$ dependent variables $y_1, y_2, \ldots, y_n$, each of which is a function of $n$ independent variables $x_1, x_2, \ldots, x_n$, so that $y_s=f_s(x_1, x_2, \ldots, x_n)$. From the differential coefficients of the y's with regard to the x's we form the functional determinant

$$\begin{vmatrix}
\dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} & \cdots & \dfrac{\partial y_1}{\partial x_n}\\[2mm]
\vdots & & & \vdots\\[2mm]
\dfrac{\partial y_n}{\partial x_1} & \dfrac{\partial y_n}{\partial x_2} & \cdots & \dfrac{\partial y_n}{\partial x_n}
\end{vmatrix}.$$
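A concrete functional determinant, evaluated by finite differences (NumPy assumed; the helper `jacobian` and the polar-coordinate example are ours): for $y_1=x_1\cos x_2$, $y_2=x_1\sin x_2$ the determinant is $x_1$.

```python
import numpy as np

def jacobian(f, x, h=1e-6):
    """Numerical matrix of partials ∂y_i/∂x_k by central differences."""
    x = np.asarray(x, float)
    n = len(x)
    J = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n); e[k] = h
        J[:, k] = (f(x + e) - f(x - e)) / (2 * h)
    return J

f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
x = np.array([2.0, 0.8])
assert np.isclose(np.linalg.det(jacobian(f, x)), x[0], atol=1e-6)
```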