Now

u^{2}s^{2} = \frac{S(x_{1}^{2})}{n}\left(\frac{S(x_{1})}{n}\right)^{2} - \left(\frac{S(x_{1})}{n}\right)^{4}
= \frac{\left(S(x_{1}^{2})\right)^{2}}{n^{3}} + \frac{2S(x_{1}x_{2})\cdot S(x_{1}^{2})}{n^{3}} - \frac{S(x_{1}^{4})}{n^{4}} - \frac{6S(x_{1}^{2}x_{2}^{2})}{n^{4}} -

other terms of odd order which will vanish on summation.
Summing for all values and dividing by the number of cases we get
R_{u^{2}s^{2}}\,\sigma_{u^{2}}\sigma_{s^{2}} + m_{1}M_{1} = \frac{\mu_{4}}{n^{2}} + \mu_{2}^{2}\,\frac{(n-1)}{n^{2}} - \frac{\mu_{4}}{n^{3}} - 3\mu_{2}^{2}\,\frac{(n-1)}{n^{3}},
where $R_{u^{2}s^{2}}$ is the correlation between $u^{2}$ and $s^{2}$, and $m_{1}$, $M_{1}$ are the mean values of $u^{2}$ and $s^{2}$ respectively.
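The summed formula above holds for any parent population, not only the normal one. As a quick numerical sketch (not part of the original argument; the uniform parent and the sample size below are illustrative choices, picked so that $\mu_{4} \neq 3\mu_{2}^{2}$), a Monte Carlo estimate of the mean of $u^{2}s^{2}$ can be compared against the formula:

```python
import numpy as np

# Monte Carlo check of the mean of u^2 * s^2 over many samples of n,
# drawn from a uniform parent (non-normal, so mu_4 != 3*mu_2^2).
rng = np.random.default_rng(0)
n, trials = 5, 400_000
x = rng.uniform(-0.5, 0.5, size=(trials, n))

u = x.mean(axis=1)         # sample mean (distance from the population mean 0)
s2 = x.var(axis=1)         # sample variance with divisor n, i.e. s^2
empirical = np.mean(u**2 * s2)

mu2 = 1 / 12               # second moment of U(-1/2, 1/2)
mu4 = 1 / 80               # fourth moment of U(-1/2, 1/2)
theory = mu4/n**2 + mu2**2*(n-1)/n**2 - mu4/n**3 - 3*mu2**2*(n-1)/n**3

assert abs(empirical - theory) / theory < 0.05
```

With 400,000 samples the empirical mean agrees with the formula to well under a percent.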
Now $m_{1}$, the mean value of $u^{2}$, is $\frac{\mu_{2}}{n}$, and $M_{1}$, the mean value of $s^{2}$, is $\mu_{2}\frac{(n-1)}{n}$, so that $m_{1}M_{1} = \mu_{2}^{2}\frac{(n-1)}{n^{2}}$; also, for a normal population, $\mu_{4} = 3\mu_{2}^{2}$. Substituting,

R_{u^{2}s^{2}}\,\sigma_{u^{2}}\sigma_{s^{2}} + \mu_{2}^{2}\,\frac{(n-1)}{n^{2}} = \mu_{2}^{2}\,\frac{(n-1)}{n^{3}}\{3+n-3\} = \mu_{2}^{2}\,\frac{(n-1)}{n^{2}}.
Hence $R_{u^{2}s^{2}}\,\sigma_{u^{2}}\sigma_{s^{2}} = 0$, or there is no correlation between $u^{2}$ and $s^{2}$.
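This vanishing correlation is easy to confirm by simulation. The sketch below (a hypothetical check, with arbitrary illustrative parameters) draws many normal samples and computes the sample correlation between $u^{2}$ and $s^{2}$:

```python
import numpy as np

# For samples drawn from a normal population, the correlation between
# u^2 (squared distance of the sample mean from the population mean)
# and s^2 (sample variance, divisor n) should vanish.
rng = np.random.default_rng(1)
n, trials = 8, 200_000
x = rng.normal(loc=0.0, scale=2.0, size=(trials, n))

u2 = x.mean(axis=1) ** 2
s2 = x.var(axis=1)
r = np.corrcoef(u2, s2)[0, 1]

assert abs(r) < 0.02       # zero up to Monte Carlo noise (~1/sqrt(trials))
```

For a skewed parent population the same experiment produces a clearly non-zero correlation, which is why the normality assumption matters here.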
To find the equation representing the frequency distribution of the means of samples of $n$ drawn from a normal population, the mean being expressed in terms of the standard deviation of the sample.
We have

y = \frac{C}{\sigma^{n-1}}\,s^{n-2}\,e^{-\frac{ns^{2}}{2\sigma^{2}}}

as the equation representing the distribution of $s$, the standard deviation of a sample of $n$, when the samples are drawn from a normal population with standard deviation $\sigma$.
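Two consequences of this density are easy to check numerically (a hypothetical sketch, not in the original; in modern terms $ns^{2}/\sigma^{2}$ follows a $\chi^{2}$ distribution with $n-1$ degrees of freedom, so the mean of $s^{2}$ is $\sigma^{2}(n-1)/n$):

```python
import numpy as np

# Check consequences of the stated density of s for normal samples:
# the mean of s^2 is sigma^2*(n-1)/n, and n*s^2/sigma^2 has mean n-1
# and variance 2(n-1), the chi-square moments with n-1 degrees of freedom.
rng = np.random.default_rng(2)
n, sigma, trials = 6, 1.5, 300_000
x = rng.normal(0.0, sigma, size=(trials, n))

s2 = x.var(axis=1)          # divisor n, as in Student's convention
q = n * s2 / sigma**2

assert abs(s2.mean() - sigma**2 * (n - 1) / n) < 0.02
assert abs(q.mean() - (n - 1)) < 0.05
assert abs(q.var() - 2 * (n - 1)) < 0.3
```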
Now the means of these samples of $n$ are distributed according to the equation

y = \frac{\sqrt{n}\,N}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{nx^{2}}{2\sigma^{2}}},[1]
and we have shown that there is no correlation between $x$, the distance of the mean of the sample from the mean of the population, and $s$, the standard deviation of the sample.
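The quoted density of the means simply says that the sample mean is normal with standard deviation $\sigma/\sqrt{n}$, which a short simulation confirms (a hypothetical check; the parameters are illustrative):

```python
import numpy as np

# The means of samples of n from a normal population with standard
# deviation sigma are themselves normally distributed with standard
# deviation sigma / sqrt(n), as the quoted density states.
rng = np.random.default_rng(3)
n, sigma, trials = 9, 2.0, 300_000
means = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)

assert abs(means.mean()) < 0.01
assert abs(means.std() - sigma / np.sqrt(n)) < 0.01
```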
Now let us suppose $x$ measured in terms of $s$, i.e. let us find the distribution of

z = \frac{x}{s}.
If we have $y_{1} = \phi(x)$ and $y_{2} = \psi(z)$ as the equations representing the frequency of $x$ and of $z$ respectively, then
y_{1}\,dx = y_{2}\,dz = y_{2}\,\frac{dx}{s},

\therefore y_{2} = s\,y_{1}.
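The change-of-variables step can be illustrated numerically: for a fixed $s$, if $x$ has density $\phi$, then $z = x/s$ has density $\psi(z) = s\,\phi(sz)$, which still integrates to one. A minimal sketch (assuming a standard normal $\phi$ purely for illustration; any density would do):

```python
import numpy as np

# Change of variables z = x/s for a fixed s: if y1 = phi(x) is the
# density of x, then y2 = psi(z) = s * phi(s*z) is the density of z,
# since y1 dx = y2 dz and dx = s dz.
s = 0.7
z = np.linspace(-12.0, 12.0, 200_001)

def phi(x):
    # standard normal density, used here only as an example of y1
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

psi = s * phi(s * z)                      # y2 = s * y1 evaluated at x = s*z
total = float(np.sum(psi) * (z[1] - z[0]))  # Riemann sum; should be ~1

assert abs(total - 1.0) < 1e-6
```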
[1] Airy, Theory of Errors of Observations, Part II, § 6.