
Elementary Principles in Statistical Mechanics/Chapter XII

by Josiah Willard Gibbs

CHAPTER XII.

ON THE MOTION OF SYSTEMS AND ENSEMBLES OF SYSTEMS THROUGH LONG PERIODS OF TIME.

An important question which suggests itself in regard to any case of dynamical motion is whether the system considered will return in the course of time to its initial phase, or, if it will not return exactly to that phase, whether it will do so to any required degree of approximation in the course of a sufficiently long time. To be able to give even a partial answer to such questions, we must know something in regard to the dynamical nature of the system. In the following theorem, the only assumption in this respect is such as we have found necessary for the existence of the canonical distribution.

If we imagine an ensemble of identical systems to be distributed with a uniform density throughout any finite extension-in-phase, the number of the systems which leave the extension-in-phase and will not return to it in the course of time is less than any assignable fraction of the whole number; provided, that the total extension-in-phase for the systems considered between two limiting values of the energy is finite, these limiting values being less and greater respectively than any of the energies of the first-mentioned extension-in-phase.

To prove this, we observe that at the moment which we call initial the systems occupy the given extension-in-phase. It is evident that some systems must leave the extension immediately, unless all remain in it forever. Those systems which leave the extension at the first instant, we shall call the front of the ensemble. It will be convenient to speak of this front as generating the extension-in-phase through which it passes in the course of time, as in geometry a surface is said to generate the volume through which it passes. In equal times the front generates equal extensions in phase. This is an immediate consequence of the principle of conservation of extension-in-phase, unless indeed we prefer to consider it as a slight variation in the expression of that principle. For in two equal short intervals of time let the extensions generated be A and B. (We make the intervals short simply to avoid the complications in the enunciation or interpretation of the principle which would arise when the same extension-in-phase is generated more than once in the interval considered.) Now if we imagine that at a given instant systems are distributed throughout the extension A, it is evident that the same systems will after a certain time occupy the extension B, which is therefore equal to A in virtue of the principle cited. The front of the ensemble, therefore, goes on generating equal extensions in equal times. But these extensions are included in a finite extension, viz., that bounded by certain limiting values of the energy. Sooner or later, therefore, the front must generate phases which it has before generated. Such second generation of the same phases must commence with the initial phases. Therefore a portion at least of the front must return to the original extension-in-phase. The same is of course true of the portion of the ensemble which follows that portion of the front through the same phases at a later time.
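
The counting implicit in this argument may be made explicit by a brief sketch, in which the symbols V, A, n, and τ are introduced for illustration only and are not part of the text above. If A is the extension generated by the front in each of the equal intervals τ, and V is the finite total extension-in-phase bounded by the two limiting values of the energy, then after n intervals the front has generated an extension nA, all of which lies within V. The regions generated can therefore remain wholly distinct only so long as

    n A \le V, \qquad \text{that is,} \qquad n \le V/A,

so that within a time of the order of (V/A) τ the front must begin to generate phases which it has generated before, commencing, as stated above, with the initial phases.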

It remains to consider how large the portion of the ensemble is, which will return to the original extension-in-phase. There can be no portion of the given extension-in-phase, the systems of which leave the extension and do not return. For we can prove for any portion of the extension as for the whole, that at least a portion of the systems leaving it will return.

We may divide the given extension-in-phase into parts as follows. There may be parts such that the systems within them will never pass out of them. These parts may indeed constitute the whole of the given extension. But if the given extension is very small, these parts will in general be nonexistent. There may be parts such that systems within them will all pass out of the given extension and all return within it. The whole of the given extension-in-phase is made up of parts of these two kinds. This does not exclude the possibility of phases on the boundaries of such parts, such that systems starting with those phases would leave the extension and never return. But in the supposed distribution of an ensemble of systems with a uniform density-in-phase, such systems would not constitute any assignable fraction of the whole number.

These distinctions may be illustrated by a very simple example. If we consider the motion of a rigid body of which one point is fixed, and which is subject to no forces, we find three cases. (1) The motion is periodic. (2) The system will never return to its original phase, but will return infinitely near to it. (3) The system will never return either exactly or approximately to its original phase. But if we consider any extension-in-phase, however small, a system leaving that extension will return to it except in the case called by Poinsot 'singular,' viz., when the motion is a rotation about an axis lying in one of two planes having a fixed position relative to the rigid body. But all such phases do not constitute any true extension-in-phase in the sense in which we have defined and used the term.[1]

In the same way it may be proved that the systems in a canonical ensemble which at a given instant are contained within any finite extension-in-phase will in general return to that extension-in-phase, if they leave it, the exceptions, i. e., the number which pass out of the extension-in-phase and do not return to it, being less than any assignable fraction of the whole number. In other words, the probability that a system taken at random from the part of a canonical ensemble which is contained within any given extension-in-phase, will pass out of that extension and not return to it, is zero.

A similar theorem may be enunciated with respect to a microcanonical ensemble. Let us consider the fractional part of such an ensemble which lies within any given limits of phase. This fraction we shall denote by f. It is evidently constant in time, since the ensemble is in statistical equilibrium. The systems within the limits will not in general remain the same, but some will pass out in each unit of time while an equal number come in. Some may pass out never to return within the limits. But the number which in any time however long pass out of the limits never to return will not bear any finite ratio to the number within the limits at a given instant. For, if it were otherwise, let f₁ denote the fraction representing such ratio for the time T. Then, in the time T, the number which pass out never to return will bear the ratio f f₁ to the whole number in the ensemble, and in a time exceeding T/(f f₁) the number which pass out of the limits never to return would exceed the total number of systems in the ensemble. The proposition is therefore proved.
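
The arithmetic of this reductio may be set out briefly, using the symbols restored above together with N for the total number of systems in the ensemble (a symbol introduced here for illustration only). The number within the limits at any instant is fN; of these, f₁ fN leave in each interval T never to return, so that the number lost in this way after n such intervals is

    n\, f f_1\, N ,

which exceeds the total number N as soon as n > 1/(f f₁), that is, in a time exceeding T/(f f₁); the supposed finite ratio f₁ is therefore impossible.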

This proof will apply to the cases before considered, and may be regarded as more simple than that which was given. It may also be applied to any true case of statistical equilibrium. By a true case of statistical equilibrium is meant such as may be described by giving the general value of the probability that an unspecified system of the ensemble is contained within any given limits of phase. [2]

Let us next consider whether an ensemble of isolated systems has any tendency in the course of time toward a state of statistical equilibrium.

There are certain functions of phase which are constant in time. The distribution of the ensemble with respect to the values of these functions is necessarily invariable, that is, the number of systems within any limits which can be specified in terms of these functions cannot vary in the course of time. The distribution in phase which without violating this condition gives the least value of the average index of probability of phase (η̄) is unique, and is that in which the index of probability (η) is a function of the functions mentioned.[3] It is therefore a permanent distribution,[4] and the only permanent distribution consistent with the invariability of the distribution with respect to the functions of phase which are constant in time.
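
As a reminder, and only as a sketch recalling the notation of the earlier chapters (which does not appear explicitly in this paragraph), the average index in question may be written

    \overline{\eta} = \int \cdots \int \eta\, e^{\eta}\; dp_1 \cdots dq_n ,

where e^η = P is the coefficient of probability of phase; Theorem IV of Chapter XI asserts that, among all distributions consistent with the given distribution with respect to the functions of phase which are constant in time, this integral is least when η depends on the phase only through those functions.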

It would seem, therefore, that we might find a sort of measure of the deviation of an ensemble from statistical equilibrium in the excess of the average index above the minimum which is consistent with the condition of the invariability of the distribution with respect to the constant functions of phase. But we have seen that the index of probability is constant in time for each system of the ensemble. The average index is therefore constant, and we find by this method no approach toward statistical equilibrium in the course of time.

Yet we must here exercise great caution. One function may approach indefinitely near to another function, while some quantity determined by the first does not approach the corresponding quantity determined by the second. A line joining two points may approach indefinitely near to the straight line joining them, while its length remains constant. We may find a closer analogy with the case under consideration in the effect of stirring an incompressible liquid.[5] In a space of 2n dimensions the case might be made analytically identical with that of an ensemble of systems of n degrees of freedom, but the analogy is perfect in ordinary space. Let us suppose the liquid to contain a certain amount of coloring matter which does not affect its hydrodynamic properties. Now the state in which the density of the coloring matter is uniform, i. e., the state of perfect mixture, which is a sort of state of equilibrium in this respect that the distribution of the coloring matter in space is not affected by the internal motions of the liquid, is characterized by a minimum value of the average square of the density of the coloring matter. Let us suppose, however, that the coloring matter is distributed with a variable density. If we give the liquid any motion whatever, subject only to the hydrodynamic law of incompressibility (it may be a steady flux, or it may vary with the time), the density of the coloring matter at any same point of the liquid will be unchanged, and the average square of this density will therefore be unchanged. Yet no fact is more familiar to us than that stirring tends to bring a liquid to a state of uniform mixture, or uniform densities of its components, which is characterized by minimum values of the average squares of these densities. It is quite true that in the physical experiment the result is hastened by the process of diffusion, but the result is evidently not dependent on that process.
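
That the state of perfect mixture gives the least average square of the density follows from an elementary identity. Writing ρ for the density of the coloring matter and \overline{\rho} for its mean value over the liquid (symbols introduced here for illustration only), and remembering that the motion of an incompressible liquid leaves that mean value unchanged,

    \overline{\rho^{2}} \;=\; \overline{\rho}^{\,2} + \overline{(\rho - \overline{\rho})^{2}} \;\ge\; \overline{\rho}^{\,2},

with equality only when ρ is everywhere equal to its mean value, that is, in the state of uniform mixture.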

The contradiction is to be traced to the notion of the density of the coloring matter, and the process by which this quantity is evaluated. This quantity is the limiting ratio of the quantity of the coloring matter in an element of space to the volume of that element. Now if we should take for our elements of volume, after any amount of stirring, the spaces occupied by the same portions of the liquid which originally occupied any given system of elements of volume, the densities of the coloring matter, thus estimated, would be identical with the original densities as determined by the given system of elements of volume. Moreover, if at the end of any finite amount of stirring we should take our elements of volume in any ordinary form but sufficiently small, the average square of the density of the coloring matter, as determined by such element of volume, would approximate to any required degree to its value before the stirring. But if we take any element of space of fixed position and dimensions, we may continue the stirring so long that the densities of the colored liquid estimated for these fixed elements will approach a uniform limit, viz., that of perfect mixture.

The case is evidently one of those in which the limit of a limit has different values, according to the order in which we apply the processes of taking a limit. If, treating the elements of volume as constant, we continue the stirring indefinitely, we get a uniform density, a result not affected by making the elements as small as we choose; but if, treating the amount of stirring as finite, we diminish indefinitely the elements of volume, we get exactly the same distribution in density as before the stirring, a result which is not affected by continuing the stirring as long as we choose. The question is largely one of language and definition. One may perhaps be allowed to say that a finite amount of stirring will not affect the mean square of the density of the coloring matter, but an infinite amount of stirring may be regarded as producing a condition in which the mean square of the density has its minimum value, and the density is uniform. We may certainly say that a sensibly uniform density of the colored component may be produced by stirring. Whether the time required for this result would be long or short depends upon the nature of the motion given to the liquid, and the fineness of our method of evaluating the density.
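
The dependence upon the order of the two limiting processes may be indicated compactly. Writing ρ_E(t) for the density of the coloring matter evaluated over a fixed element of volume E after stirring for a time t (the symbols ρ, E, and t being introduced here for illustration only), the two orders give, for an initial distribution which is not uniform,

    \lim_{E \to 0}\,\lim_{t \to \infty}\, \overline{\rho_E(t)^{2}} \;=\; \overline{\rho}^{\,2}
    \;<\;
    \lim_{t \to \infty}\,\lim_{E \to 0}\, \overline{\rho_E(t)^{2}} \;=\; \lim_{E \to 0}\, \overline{\rho_E(0)^{2}},

the first value being that of perfect mixture and the last that before the stirring.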

All this may appear more distinctly if we consider a special case of liquid motion. Let us imagine a cylindrical mass of liquid of which one sector of 90° is black and the rest white. Let it have a motion of rotation about the axis of the cylinder in which the angular velocity is a function of the distance from the axis. In the course of time the black and the white parts would become drawn out into thin ribbons, which would be wound spirally about the axis. The thickness of these ribbons would diminish without limit, and the liquid would therefore tend toward a state of perfect mixture of the black and white portions. That is, in any given element of space, the proportion of the black and white would approach 1:3 as a limit. Yet after any finite time, the total volume would be divided into two parts, one of which would consist of the white liquid exclusively, and the other of the black exclusively. If the coloring matter, instead of being distributed initially with a uniform density throughout a section of the cylinder, were distributed with a density represented by any arbitrary function of the cylindrical coördinates r, θ, and z, the effect of the same motion continued indefinitely would be an approach to a condition in which the density is a function of r and z alone. In this limiting condition, the average square of the density would be less than in the original condition, when the density was supposed to vary with θ, although after any finite time the average square of the density would be the same as at first.
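
A short numerical sketch, not part of Gibbs' text, may make the behavior of this example concrete. The script below (Python, using only numpy; all names such as n_pts and coarse_mean_square are chosen here for illustration, and the angular velocity is taken, for definiteness, equal to the distance from the axis) rotates a 90° black sector of a disk and evaluates the mean square of the black density over fixed cells of a coarse grid. The cell-averaged value falls from about 1/4 toward 1/16, the value for perfect mixture, while the fine-grained mean square remains 1/4 for all time.

    import numpy as np

    # A numerical sketch of the rotating-cylinder example (one cross-section).
    # Points sampled uniformly over the unit disk; a 90-degree sector is "black".
    rng = np.random.default_rng(0)
    n_pts = 200_000
    r = np.sqrt(rng.uniform(0.0, 1.0, n_pts))        # uniform in area
    theta0 = rng.uniform(0.0, 2.0 * np.pi, n_pts)
    black = (theta0 < 0.5 * np.pi).astype(float)     # density 1 on the black sector, 0 elsewhere

    def coarse_mean_square(t, n_cells=40):
        """Mean square of the black density over a fixed n_cells x n_cells grid,
        after each point has turned through the angle omega(r) * t with omega(r) = r."""
        theta = theta0 + r * t                       # angular velocity a function of the radius
        x, y = r * np.cos(theta), r * np.sin(theta)
        ix = np.minimum(((x + 1.0) * 0.5 * n_cells).astype(int), n_cells - 1)
        iy = np.minimum(((y + 1.0) * 0.5 * n_cells).astype(int), n_cells - 1)
        cell = ix * n_cells + iy
        counts = np.bincount(cell, minlength=n_cells * n_cells)
        blacks = np.bincount(cell, weights=black, minlength=n_cells * n_cells)
        occupied = counts > 0
        density = blacks[occupied] / counts[occupied]
        # Cells are weighted by the number of sample points they contain,
        # which approximates weighting by the volume of liquid in each cell.
        return np.average(density ** 2, weights=counts[occupied])

    for t in (0.0, 5.0, 20.0, 100.0, 1000.0):
        print(f"t = {t:7.1f}   coarse-grained mean square = {coarse_mean_square(t):.4f}")
    # Falls from about 0.25 (unmixed) toward 0.0625 = (1/4)^2 (perfect mixture);
    # the fine-grained mean square of the density stays at 0.25 throughout.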

If we limit our attention to the motion in a single plane perpendicular to the axis of the cylinder, we have something which is almost identical with a diagrammatic representation of the changes in distribution in phase of an ensemble of systems of one degree of freedom, in which the motion is periodic, the period varying with the energy, as in the case of a pendulum swinging in a circular arc. If the coördinates and momenta of the systems are represented by rectangular coördinates in the diagram, the points in the diagram representing the changing phases of moving systems, will move about the origin in closed curves of constant energy. The motion will be such that areas bounded by points representing moving systems will be preserved. The only difference between the motion of the liquid and the motion in the diagram is that in one case the paths are circular, and in the other they differ more or less from that form.
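
The statement that areas are preserved is the one-degree-of-freedom form of the principle of conservation of extension-in-phase. As a sketch, if (p₀, q₀) denotes the phase of a system at one time and (p_t, q_t) its phase at a later time (the subscripts being introduced here for illustration), the motion satisfies

    \frac{\partial (p_t,\, q_t)}{\partial (p_0,\, q_0)} = 1,
    \qquad\text{so that}\qquad
    \iint dp_t\, dq_t = \iint dp_0\, dq_0

over corresponding regions, which is the analogue of the incompressibility of the liquid in the illustration.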

When the energy is proportional to p² + q², the curves of constant energy are circles, and the period is independent of the energy. There is then no tendency toward a state of statistical equilibrium. The diagram turns about the origin without change of form. This corresponds to the case of liquid motion, when the liquid revolves with a uniform angular velocity like a rigid solid.
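
The case described may be written out for a single degree of freedom. As a sketch, with a standing for a constant of proportionality introduced here for illustration, take

    \epsilon = \tfrac{1}{2}\, a \left( p^{2} + q^{2} \right),
    \qquad
    \dot{q} = \frac{\partial \epsilon}{\partial p} = a\, p,
    \qquad
    \dot{p} = -\frac{\partial \epsilon}{\partial q} = -a\, q,

so that every point of the diagram revolves about the origin with the same angular velocity a, and the period 2π/a is the same for all the circles of constant energy; the whole figure therefore turns about the origin without change of form.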

The analogy between the motion of an ensemble of systems in an extension-in-phase and a steady current in an incompressible liquid, and the diagrammatic representation of the case of one degree of freedom, which appeals to our geometrical intuitions, may be sufficient to show how the conservation of density in phase, which involves the conservation of the average value of the index of probability of phase, is consistent with an approach to a limiting condition in which that average value is less. We might perhaps fairly infer from such considerations as have been adduced that an approach to a limiting condition of statistical equilibrium is the general rule, when the initial condition is not of that character. But the subject is of such importance that it seems desirable to give it farther consideration.

Let us suppose the total extension-in-phase for the kind of system considered to be divided into equal elements (DV) which are very small but not infinitely small. Let us imagine an ensemble of systems distributed in this extension in a manner represented by the index of probability η, which is an arbitrary function of the phase subject only to the restriction expressed by equation (46) of Chapter I. We shall suppose the elements to be so small that η may in general be regarded as sensibly constant within any one of them at the initial moment. Let the path of a system be defined as the series of phases through which it passes.

At the initial moment (t′) a certain system is in an element of extension DV′. Subsequently, at the time t″, the same system is in the element DV″. Other systems which were at first in DV′ will at the time t″ be in DV″, but not all, probably. The systems which were at first in DV′ will at the time t″ occupy an extension-in-phase exactly as large as at first. But it will probably be distributed among a very great number of the elements (DV) into which we have divided the total extension-in-phase. If it is not so, we can generally take a later time at which it will be so. There will be exceptions to this for particular laws of motion, but we will confine ourselves to what may fairly be called the general case. Only a very small part of the systems initially in DV′ will be found in DV″ at the time t″, and those which are found in DV″ at that time were at the initial moment distributed among a very large number of elements DV.

What is important for our purpose is the value of η″, the index of probability of phase in the element DV″ at the time t″. In the part of DV″ occupied by systems which at the time t′ were in DV′ the value of η″ will be the same as its value in DV′ at the time t′, which we shall call η′. In the parts of DV″ occupied by systems which at t′ were in elements very near to DV′ we may suppose the value of η″ to vary little from η′. We cannot assume this in regard to parts of DV″ occupied by systems which at t′ were in elements remote from DV′. We want, therefore, some idea of the nature of the extension-in-phase occupied at t′ by the systems which at t″ will occupy DV″. Analytically, the problem is identical with finding the extension occupied at t″ by the systems which at t′ occupied DV′. Now the systems in DV″ which lie on the same path as the system first considered, evidently arrived at DV″ at nearly the same time, and must have left DV′ at nearly the same time, and therefore at t′ were in or near DV′. We may therefore take η′ as the value of η″ for these systems. The same essentially is true of systems in DV″ which lie on paths very close to the path already considered. But with respect to paths passing through DV′ and DV″, but not so close to the first path, we cannot assume that the time required to pass from DV′ to DV″ is nearly the same as for the first path. The difference of the times required may be small in comparison with t″ − t′, but as this interval can be as large as we choose, the difference of the times required in the different paths has no limit to its possible value. Now if the case were one of statistical equilibrium, the value of η would be constant in any path, and if all the paths which pass through DV″ also pass through or near DV′, the value of η throughout DV″ will vary little from η′. But when the case is not one of statistical equilibrium, we cannot draw any such conclusion. The only conclusion which we can draw with respect to the phase at t′ of the systems which at t″ are in DV″ is that they are nearly on the same path.

Now if we should make a new estimate of indices of probability of phase at the time t″, using for this purpose the elements DV, that is, if we should divide the number of systems in DV″, for example, by the total number of systems, and also by the extension-in-phase of the element, and take the logarithm of the quotient, we would get a number which would be less than the average value of η for the systems within DV″ based on the distribution in phase at the time t′.[6] Hence the average value of η for the whole ensemble of systems based on the distribution at t″ will be less than the average value based on the distribution at t′.
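
The inequality appealed to here rests upon the convexity of the function u log u. The following sketch uses the notation P = e^η of the earlier chapters, with \overline{P} denoting the mean value of P over the element DV″ at the time t″ (the bar over P in this sense being introduced here for illustration). For any distribution of P over the element,

    \overline{P}\, \log \overline{P} \;\le\; \overline{P \log P},

with equality only when P is constant over the element; dividing both sides by \overline{P} shows that the coarse-grained index log \overline{P} cannot exceed the average of η taken over the systems within the element.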

We must not forget that there are exceptions to this general rule. These exceptions are in cases in which the laws of motion are such that systems having small differences of phase will continue always to have small differences of phase.

It is to be observed that if the average index of probability in an ensemble may be said in some sense to have a less value at one time than at another, it is not necessarily priority in time which determines the greater average index. If a distribution, which is not one of statistical equilibrium, should be given for a time t′, and the distribution at an earlier time t″ should be defined as that given by the corresponding phases, if we increase the interval, leaving t′ fixed and taking t″ at an earlier and earlier date, the distribution at t″ will in general approach a limiting distribution which is in statistical equilibrium. The determining difference in such cases is that between a definite distribution at a definite time and the limit of a varying distribution when the moment considered is carried either forward or backward indefinitely.[7]

But while the distinction of prior and subsequent events may be immaterial with respect to mathematical fictions, it is quite otherwise with respect to the events of the real world. It should not be forgotten, when our ensembles are chosen to illustrate the probabilities of events in the real world, that while the probabilities of subsequent events may often be determined from the probabilities of prior events, it is rarely the case that probabilities of prior events can be determined from those of subsequent events, for we are rarely justified in excluding the consideration of the antecedent probability of the prior events.

It is worthy of notice that to take a system at random from an ensemble at a date chosen at random from several given dates, t₁, t₂, etc., is practically the same thing as to take a system at random from the ensemble composed of all the systems of the given ensemble in their phases at the time t₁, together with the same systems in their phases at the time t₂, etc. By Theorem VIII of Chapter XI this will give an ensemble in which the average index of probability will be less than in the given ensemble, except in the case when the distribution in the given ensemble is the same at the times t₁, t₂, etc. Consequently, any indefiniteness in the time in which we take a system at random from an ensemble has the practical effect of diminishing the average index of the ensemble from which the system may be supposed to be drawn, except when the given ensemble is in statistical equilibrium.
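
The effect described may be indicated briefly. If the coefficient of probability of the given ensemble is P₁ when the phases are taken at the time t₁, P₂ when they are taken at the time t₂, and so on for m equally probable dates (the symbols P and m being introduced here for illustration only), the composite ensemble has the coefficient

    P = \frac{1}{m} \left( P_1 + P_2 + \cdots + P_m \right),
    \qquad
    \int \cdots \int P \log P \; dp_1 \cdots dq_n \;\le\; \frac{1}{m} \sum_{i=1}^{m} \int \cdots \int P_i \log P_i \; dp_1 \cdots dq_n ,

by the convexity of u log u; and since the average index of the given ensemble is the same at every time, the right-hand member is that average index, so that the composite ensemble has an average index not greater, and in general less, as stated.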


  1. An ensemble of systems distributed in phase is a less simple and elementary conception than a single system. But by the consideration of suitable ensembles instead of single systems, we may get rid of the inconvenience of having to consider exceptions formed by particular cases of the integral equations of motion, these cases simply disappearing when the ensemble is substituted for the single system as a subject of study. This is especially true when the ensemble is distributed, as in the case called canonical, throughout an extension-in-phase. In a less degree it is true of the microcanonical ensemble, which does not occupy any extension-in-phase, (in the sense in which we have used the term,) although it is convenient to regard it as a limiting case with respect to ensembles which do, as we thus gain for the subject some part of the analytical simplicity which belongs to the theory of ensembles which occupy true extensions-in-phase.
  2. An ensemble in which the systems are material points constrained to move in vertical circles, with just enough energy to carry them to the highest points, cannot afford a true example of statistical equilibrium. For any other value of the energy than the critical value mentioned, we might in various ways describe an ensemble in statistical equilibrium, while the same language applied to the critical value of the energy would fail to do so. Thus, if we should say that the ensemble is so distributed that the probability that a system is in any given part of the circle is proportioned to the time which a single system spends in that part, motion in either direction being equally probable, we should perfectly define a distribution in statistical equilibrium for any value of the energy except the critical value mentioned above, but for this value of the energy all the probabilities in question would vanish unless the highest point is included in the part of the circle considered, in which case the probability is unity, or forms one of its limits, in which case the probability is indeterminate. Compare the foot-note on page 118.

    A still more simple example is afforded by the uniform motion of a material point in a straight line. Here the impossibility of statistical equilibrium is not limited to any particular energy, and the canonical distribution as well as the microcanonical is impossible.

    These examples are mentioned here in order to show the necessity of caution in the application of the above principle, with respect to the question whether we have to do with a true case of statistical equilibrium.

    Another point in respect to which caution must be exercised is that the part of an ensemble of which the theorem of the return of systems is asserted should be entirely defined by limits within which it is contained, and not by any such condition as that a certain function of phase shall have a given value. This is necessary in order that the part of the ensemble which is considered should be any assignable fraction of the whole. Thus, if we have a canonical ensemble consisting of material points in vertical circles, the theorem of the return of systems may be applied to a part of the ensemble defined as contained in a given part of the circle. But it may not be applied in all cases to a part of the ensemble defined as contained in a given part of the circle and having a given energy. It would, in fact, express the exact opposite of the truth when the given energy is the critical value mentioned above.

  3. See Chapter XI, Theorem IV.
  4. See Chapter IV, sub init.
  5. By liquid is here meant the continuous body of theoretical hydrodynamics, and not anything of the molecular structure and molecular motions of real liquids.
  6. See Chapter XI, Theorem IX.
  7. One may compare the kinematical truism that when two points are moving with uniform velocities, (with the single exception of the case where the relative motion is zero,) their mutual distance at any definite time is less than for t = ∞, or t = −∞.