Numerous proofs have been attempted, almost all of them crude paralogisms. But starting from the following hypotheses we may prove Gauss's law: the error is the result of a very large number of partial and independent errors; each partial error is very small and obeys any law of probability whatever, provided the probability of a positive error is the same as that of an equal negative error. It is clear that these conditions will often, but not always, be fulfilled, and we may reserve the name of accidental for errors which satisfy them.
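In modern terms these hypotheses are the conditions of a central limit theorem. A minimal formal restatement, with the notation (the partial errors ε_i and their variances σ_i²) introduced here only for illustration:

```latex
% The total error E is the sum of n independent partial errors, each with a
% symmetric law, P(\epsilon_i > x) = P(\epsilon_i < -x), hence mean zero, and
% each "very small", i.e. negligible next to the total spread (a Lindeberg-type
% condition). Writing \sigma^2 = \sigma_1^2 + \cdots + \sigma_n^2, one has
\[
  E = \sum_{i=1}^{n} \epsilon_i ,
  \qquad
  \frac{E}{\sigma} \xrightarrow[n \to \infty]{d} \mathcal{N}(0,1),
\]
% so that for large n the error follows Gauss's law with density
\[
  f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, e^{-x^{2}/(2\sigma^{2})} .
\]
```

The "very small" requirement is what rules out any one partial error dominating the sum; without it, the total error need not tend to Gauss's law at all.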
We see that the method of least squares is not legitimate in every case; in general, physicists are more distrustful of it than astronomers. This is no doubt because the latter, apart from the systematic errors to which they and the physicists are subject alike, have to contend with an extremely important source of error which is entirely accidental: I mean atmospheric undulations. So it is very curious to hear a physicist and an astronomer discuss a method of observation. The physicist, persuaded that one good measurement is worth more than many bad ones, is above all concerned to eliminate, by every possible precaution, the last systematic errors; the astronomer retorts: "But you can only observe a small number of stars, and accidental errors will not disappear."
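The point at issue here is the standard one: averaging n observations shrinks the purely accidental error roughly like 1/√n, but leaves any systematic error untouched. A minimal simulation sketch, with every number (the true value, the unit accidental spread, the 0.3 systematic bias) chosen purely for illustration:

```python
# Averaging n observations shrinks the accidental error like 1/sqrt(n),
# but no amount of averaging removes a constant systematic bias.
import random
import statistics

random.seed(0)
TRUE_VALUE = 10.0       # the quantity being measured (illustrative)
ACCIDENTAL_SD = 1.0     # spread of the accidental (random) error
SYSTEMATIC_BIAS = 0.3   # a constant offset that averaging cannot remove

def observe() -> float:
    """One measurement: true value + accidental error + systematic bias."""
    return TRUE_VALUE + random.gauss(0.0, ACCIDENTAL_SD) + SYSTEMATIC_BIAS

for n in (1, 10, 100, 1000):
    # Repeat the n-observation average many times to see how it scatters.
    means = [statistics.fmean(observe() for _ in range(n)) for _ in range(400)]
    spread = statistics.stdev(means)             # falls like ACCIDENTAL_SD / sqrt(n)
    bias = statistics.fmean(means) - TRUE_VALUE  # stays near SYSTEMATIC_BIAS
    print(f"n={n:5d}  accidental spread={spread:.4f}  residual bias={bias:+.4f}")
```

As n grows the accidental spread vanishes while the residual bias does not: the astronomer's many observations help against the former, the physicist's precautions against the latter.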
What conclusion must we draw? Must we continue to use the method of least squares?