arose (in 1758) from Lagrange’s precise demonstration of the
principle of Least Action for a particle, and its immediate extension,
on the basis of his new Calculus of Variations, to a system
of connected particles such as might be taken as a representation
of any material system; but here too the same physical, as
distinct from mechanical, considerations come into play as in
d’Alembert’s principle. (See Dynamics: Analytical.)
It is in the case of systems whose state is changing so slowly that reactions arising from the changing motions can be neglected that the conditions are by far the simplest. In such systems, whether stationary or in a state of steady motion, the energy depends on the configuration alone, and its mathematical expression can be determined from measurement of the work required for a sufficient number of simple transformations; once it is thus found, all the statical relations of the system are implicitly determined along with it, and the results of all other transformations can be predicted. The general development of such relations is conveniently classed as a separate branch of physics under the name of Energetics, first introduced by W. J. M. Rankine; but the essential limitations of this method have not always been observed. As regards statical change, the complete specification of a mechanical system is involved in its geometrical configuration and the function expressing its mechanical energy in terms thereof. Systems which have statical energy-functions of the same analytical form behave in corresponding ways, and can serve as models or representations of one another.
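The manner in which the energy function determines the statical relations may be indicated in symbols; the co-ordinates q1, …, qn and the letter W for the energy of configuration are supplied here merely for definiteness and do not occur in the text above. If W(q1, …, qn) is the mechanical energy, the work done in a small displacement is
\[
\delta W = \sum_{i=1}^{n} \frac{\partial W}{\partial q_i}\,\delta q_i ,
\]
so that the generalized force Q_i which must be applied to maintain the co-ordinate q_i is \(\partial W/\partial q_i\), and the configurations of free equilibrium are those for which every \(\partial W/\partial q_i\) vanishes; two systems whose functions W have the same analytical form accordingly possess identical statical relations.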
Extension to Thermal and Chemical Systems.—This dominant position of the principle of energy, in ordinary statical problems, has in recent times been extended to transformations involving change of physical state or chemical constitution as well as change of geometrical configuration. In this wider field we cannot assert that mechanical (or available) energy is never lost, for it may be degraded into thermal energy; but we can use the principle that on the other hand it can never spontaneously increase. If this were not so, cyclic processes might theoretically be arranged which would continue to supply mechanical power so long as energy of any kind remained in the system; whereas the irregular and uncontrollable character of the molecular motions and strains which constitute thermal energy, in combination with the vast number of the molecules, must place an effectual bar on their unlimited co-ordination. To establish a doctrine of energetics that shall form a sufficient foundation for a theory of the trend of chemical and physical change, we have, therefore, to impart precision to this notion of available energy.
Carnot’s Principle: Entropy.—The whole subject is involved in the new principle contributed to theoretical physics by Sadi Carnot in 1824, in which the far-reaching modern conception of cyclic processes was first scientifically developed. It was shown by Carnot, on the basis of certain axioms, whose theoretical foundations were subsequently corrected and strengthened by Clausius and Lord Kelvin, that a reversible mechanical process, working in a cycle by means of thermal transfers, which takes heat, say H1, into the material system at a given temperature T1, and delivers the part of it not utilized, say H2, at a lower given temperature T2, is more efficient, considered as a working engine, than any other such process, operating between the same two temperatures but not reversible, could be. This relation of inequality involves a definite law of equality, that the mechanical efficiencies of all reversible cyclic processes working between the same two temperatures are the same, whatever be the nature of their operation or the material substances involved in them; that in fact the efficiency is a function solely of the two temperatures at which the cyclically working system takes in and gives out heat. These considerations constitute a fundamental general principle to which all possible slow reversible processes, so far as they concern matter in bulk, must conform in all their stages; its application is almost coextensive with the scope of general physics, the special kinetic theories in which inertia is involved being excepted. (See Thermodynamics.) If the working system is an ideal gas-engine, in which a perfect gas (known from experience to be a possible state of matter) is passed through the cycle, and if temperature is measured from the absolute zero by the expansion of this gas, then simple direct calculation on the basis of the laws of ideal gases shows that H1/T1 = H2/T2; and as by the conservation of energy the work done is H1 − H2, it follows that the efficiency, measured as the ratio of the work done to the supply of heat, is 1 − T2/T1. If we change the sign of H1, and thus consider heat as positive when it is restored to the system, as is H2, the fundamental equation becomes H1/T1 + H2/T2 = 0; and as any complex reversible working system may be considered as compounded in various ways of chains of elementary systems of this type, whose effects are additive, the general proposition follows, that in any reversible complete cyclic change which involves the taking in of heat by the system, of amount δHr when its temperature ranges between Tr and Tr + δT, the equation ΣδHr/Tr = 0 holds good. Moreover, if the changes are not reversible, the proportion of the heat supply that is utilized for mechanical work will be smaller, so that more heat will be restored to the system, and ΣδHr/Tr or, as it may be expressed, ∫dH/T, must have a larger value, and must thus be positive. The first statement involves further that, for all reversible paths of change of the system from one state C to another state D, the value of ∫dH/T, the heat δH taken in by the system being now reckoned positive, must be the same, because any one of these paths and any other one reversed would form a cycle; whereas for any irreversible path of change between the same states this integral must have a smaller value, falling short of the difference of the entropies of the two states.
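The direct calculation referred to may be indicated in outline; the gas constant R for the quantity of gas employed, the volumes v1, v2, v3, v4 of the working substance at the four corners of the cycle, and the ratio γ of the specific heats are symbols supplied here for definiteness and do not occur above. For the two isothermal stages of the cycle of a perfect gas, natural logarithms being understood,
\[
H_1 = R T_1 \log\frac{v_2}{v_1}, \qquad H_2 = R T_2 \log\frac{v_3}{v_4},
\]
while the two adiabatic stages give \(T_1 v_2^{\gamma-1} = T_2 v_3^{\gamma-1}\) and \(T_1 v_1^{\gamma-1} = T_2 v_4^{\gamma-1}\), whence \(v_2/v_1 = v_3/v_4\); therefore
\[
\frac{H_1}{T_1} = \frac{H_2}{T_2}, \qquad
\text{efficiency} = \frac{H_1 - H_2}{H_1} = 1 - \frac{T_2}{T_1}.
\]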
The definite quantity represented by this integral for a reversible path was introduced by Clausius in 1854 (also adumbrated by Kelvin’s investigations about the same time), and was named afterwards by him the increase of the entropy of the system in passing from the state C to the state D. This increase, being thus the same for the unlimited number of possible reversible paths involving independent variation of all its finite co-ordinates, along which the system can pass, can depend only on the terminal states. The entropy belonging to a given state is therefore a function of that state alone, irrespective of the manner in which it has been reached; and this is the justification of the assignment to it of a special name, connoting a property of the system depending on its actual condition and not on its previous history. Every reversible change in an isolated system thus maintains the entropy of that system unaltered; no possible spontaneous change can involve decrease of the entropy; while any defect of reversibility, arising from diffusion of matter or motion in the system, necessarily leads to increase of entropy. For a physical or chemical system only those changes are spontaneously possible which would lead to increase of the entropy; if the entropy is already a maximum for the given total energy, and so incapable of further continuous increase under the conditions imposed upon the system, there must be stable equilibrium.
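In symbols this is merely a compact restatement of the foregoing, the suffixes C and D marking the two states, φ denoting the entropy, and δH being reckoned positive when heat is taken in by the system:
\[
\phi_D - \phi_C = \int_C^D \frac{dH}{T} \quad \text{(along any reversible path)},
\qquad
\int_C^D \frac{dH}{T} < \phi_D - \phi_C \quad \text{(along any irreversible path)};
\]
the first integral has the same value for every reversible path, so that the entropy thus defined is a function of the state alone.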
This definite quantity belonging to a material system, its entropy φ, is thus concomitant with its energy E, which is also a definite function of its actual state by the law of conservation of energy; these, along with its temperature T, and the various co-ordinates expressing its geometrical configuration and its physical and chemical constitution, are the quantities with which the thermodynamics of the system deals. That branch of science develops the consequences involved in just two principles: (i.) that the energy of every isolated system is constant, and (ii.) that its entropy can never diminish; any complication that may be involved arises from complexity in the systems to which these two laws have to be applied.
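For an isolated system the two principles may be written
\[
\delta E = 0, \qquad \delta\phi \ge 0,
\]
every transformation which the system can actually undergo being subject to both conditions at once.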
The General Thermodynamic Equation.—When any physical or chemical system undergoes an infinitesimal change of state, we have δE = δH + δU, where δH is the energy that has been acquired as heat from sources extraneous to the system during the change, and δU is the energy that has been imparted by reversible agencies such as mechanical or electric work. It is, however, not usually possible to discriminate permanently between heat acquired and work imparted, for (except in isothermal transformations) neither δH nor δU is the exact differential of a function of the constitution of the system, and so independent of its previous history, although their sum δE is such a differential; but we can utilize the fact that, when the change is effected reversibly, δH is equal to Tδφ, where δφ, as has just been seen, is such an exact differential. Thus E and φ represent properties of the system which, along with
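The equation may be illustrated for the simplest case, that of a homogeneous fluid of volume v under uniform pressure p, the symbols p and v being supplied here for illustration only. The reversible work is then δU = −p δv, so that
\[
\delta E = T\,\delta\phi - p\,\delta v,
\]
and E, regarded as a function of φ and v, gives \(T = \partial E/\partial\phi\) and \(-p = \partial E/\partial v\).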