# Data distribution


## Standard deviation and Variance

### SD/Variance of Quantitative data ~ mean

| | Size | Mean | Standard deviation | Variance | Notes |
|---|---|---|---|---|---|
| Population | $\displaystyle{ N }$ | $\displaystyle{ \mu = \frac{\textstyle \sum_{i=1}^N X_i}{N} }$ | $\displaystyle{ \sigma=\sqrt{\frac{\sum_{i=1}^N (X_i-\mu)^2}{N}} }$ | $\displaystyle{ \sigma^2=\frac{\sum_{i=1}^N (X_i-\mu)^2}{N} }$ | $\displaystyle{ X_i }$ is each value in the population; quantitative (continuous or discrete) |
| Sample | $\displaystyle{ n }$ | $\displaystyle{ \overline{x} = \frac{\textstyle \sum_{i=1}^n x_i}{n} }$ | $\displaystyle{ s=\sqrt{\frac{\sum_{i=1}^n (x_i-\overline{x})^2}{\color{Red}n-1}} }$ | $\displaystyle{ s^2=\frac{\sum_{i=1}^n (x_i-\overline{x})^2}{\color{Red}n-1} }$ | $\displaystyle{ x_i }$ is each value in the sample; quantitative (continuous or discrete); $\displaystyle{ \color{Red}n-1 }$ is derived from the degrees of freedom |
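The difference between the two denominators can be checked numerically. A minimal sketch in Python (hypothetical data values; the standard library's `statistics.pvariance` and `statistics.variance` implement the population and sample formulas respectively):

```python
import statistics

# Hypothetical sample of quantitative data (illustrative values only)
x = [4.0, 7.0, 6.0, 5.0, 8.0]
n = len(x)
mean = sum(x) / n

# Population formula: divide the sum of squared deviations by N
pop_var = sum((xi - mean) ** 2 for xi in x) / n
# Sample formula: divide by n - 1 (degrees of freedom)
samp_var = sum((xi - mean) ** 2 for xi in x) / (n - 1)

# The standard library agrees with both definitions
assert abs(pop_var - statistics.pvariance(x)) < 1e-12
assert abs(samp_var - statistics.variance(x)) < 1e-12

print(pop_var, samp_var)  # the sample variance is larger by the factor n/(n-1)
```

The sample variance exceeds the population-style variance by exactly the factor $\displaystyle{ \frac{n}{n-1} }$.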

### SD/Variance of Binomial data ~ proportion

| | Size | Proportion | Standard deviation ¶ | Variance ¶ | Notes |
|---|---|---|---|---|---|
| Population | $\displaystyle{ N }$ | $\displaystyle{ \pi = \frac{\sum_{i=1}^N X_i}{N} }$ where $\displaystyle{ X_i = 0 }$ or $\displaystyle{ 1 }$ | $\displaystyle{ \sigma = \sqrt{\pi (1 - \pi)} }$ | $\displaystyle{ \sigma^2 = \pi (1 - \pi) }$ | $\displaystyle{ X_i }$ is each value in the population; binary (0 or 1) |
| Sample | $\displaystyle{ n }$ | $\displaystyle{ p = \frac{\sum_{i=1}^n x_i}{n} }$ where $\displaystyle{ x_i = 0 }$ or $\displaystyle{ 1 }$ | $\displaystyle{ s = \sqrt{\frac{n}{n-1} \cdot p (1 - p)} \approx \sqrt{p (1-p)} }$ | $\displaystyle{ s^2 = \frac{n}{n-1} \cdot p (1 - p) \approx p (1 - p) }$ | $\displaystyle{ x_i }$ is each value in the sample; binary (0 or 1) |

¶ How to derive the variance and standard deviation of the proportion in a population:

The definition of the variance of values in a population is $\displaystyle{ \frac{\sum_{i=1}^N (X_i - \mu)^2}{N} }$ .

Here, $\displaystyle{ {\color{Green}\mu} }$ is $\displaystyle{ {\color{Green}\frac{\sum_{i=1}^N X_i}{N}} }$ according to its definition.

This is $\displaystyle{ {\color{Green}\pi} }$ itself (refer to the above table).

$\displaystyle{ {\color{Green}\mu} = {\color{Green}\frac{\sum_{i=1}^N X_i}{N}} = {\color{Green}\pi} }$

And when we consider $\displaystyle{ {\color{Red}\frac{\sum_{i=1}^N X_i^2}{N}} }$ , provided that $\displaystyle{ X_i = 0 }$ or $\displaystyle{ 1 }$, it follows that:

$\displaystyle{ {\color{Red}\frac{\sum_{i=1}^N X_i^2}{N}} = {\color{Green}\frac{\sum_{i=1}^N X_i}{N}} = {\color{Green}\pi} }$

Thus the variance of population proportion can be calculated as follows:

$\displaystyle{ \begin{align} \sigma^2 & = \frac{\sum_{i=1}^N (X_i - {\color{Green}\mu})^2}{N} \\ & = \frac{\sum_{i=1}^N (X_i - {\color{Green}\pi})^2}{N} \\ & = \frac{\sum_{i=1}^N (X_i^2 - 2 {\color{Green}\pi} \cdot X_i + {\color{Green}\pi^2})}{N} \\ & = \frac{\sum_{i=1}^N X_i^2}{N} - 2 {\color{Green}\pi} \cdot \frac{\sum_{i=1}^N X_i}{N} + {\color{Green}\pi^2} \cdot \frac{\sum_{i=1}^N 1}{N}\\ & = {\color{Red}\frac{\sum_{i=1}^N X_i^2}{N}} - 2 {\color{Green}\pi} \cdot {\color{Green}\frac{\sum_{i=1}^N X_i}{N}} + {\color{Green}\pi^2} \cdot \frac{{\color{Orange}\sum_{i=1}^N 1}}{N}\\ & = {\color{Green}\pi} - 2 {\color{Green}\pi} \cdot {\color{Green}\pi} + {\color{Green}\pi^2} \cdot \frac{\color{Orange}N}{N} \\ & = \pi - 2\pi^2 + \pi^2 \\ & = \pi - \pi^2 \\ & = \pi(1-\pi) \end{align} }$

Then the standard deviation is also obtained:

$\displaystyle{ \begin{align} \sigma & = \sqrt{\sigma^2} \\ & = \sqrt{\pi(1-\pi)} \end{align} }$
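The result $\displaystyle{ \sigma^2 = \pi(1-\pi) }$ can be checked numerically against the definition of variance. A minimal sketch in Python, with hypothetical 0/1 data:

```python
# Numeric check of the identity sigma^2 = pi(1 - pi) for binary data
# (hypothetical 0/1 observations, illustrative only)
X = [1, 0, 1, 1, 0, 1, 0, 1, 1, 1]   # population of N = 10 binary values
N = len(X)
pi = sum(X) / N                       # population proportion

# Population variance by its definition
var_def = sum((xi - pi) ** 2 for xi in X) / N

# Variance from the derived formula
var_formula = pi * (1 - pi)

assert abs(var_def - var_formula) < 1e-9
print(pi, var_def)  # pi = 0.7, variance = 0.7 * 0.3 = 0.21
```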

¶ How to derive the variance and standard deviation of the proportion in a sample:

The definition of the variance of values in a sample is $\displaystyle{ \frac{\sum_{i=1}^n (x_i - \bar x)^2}{n-1} }$ .

This can be transformed into $\displaystyle{ \frac{n}{n-1} \cdot \frac{\sum_{i=1}^n (x_i - \bar x)^2}{n} }$ .

Here, $\displaystyle{ {\color{Green}\bar x} }$ is $\displaystyle{ {\color{Green}\frac{\sum_{i=1}^n x_i}{n}} }$ according to its definition.

This is $\displaystyle{ {\color{Green}p} }$ itself (refer to the above table).

$\displaystyle{ {\color{Green}\bar x} = {\color{Green}\frac{\sum_{i=1}^n x_i}{n}} = {\color{Green}p} }$

And when we consider $\displaystyle{ {\color{Red}\frac{\sum_{i=1}^n x_i^2}{n}} }$ , provided that $\displaystyle{ x_i = 0 }$ or $\displaystyle{ 1 }$, it follows that:

$\displaystyle{ {\color{Red}\frac{\sum_{i=1}^n x_i^2}{n}} = {\color{Green}\frac{\sum_{i=1}^n x_i}{n}} = {\color{Green}p} }$

Thus the variance of sample proportion can be calculated as follows:

$\displaystyle{ \begin{align} s^2 & = \frac{\sum_{i=1}^n (x_i - {\color{Green}\bar x})^2}{n-1} \\ & = \frac{n}{n-1} \cdot \frac{\sum_{i=1}^n (x_i - {\color{Green}\bar x})^2}{n} \\ & = \frac{n}{n-1} \cdot \frac{\sum_{i=1}^n (x_i - {\color{Green}p})^2}{n} \\ & = \frac{n}{n-1} \cdot \frac{\sum_{i=1}^n (x_i^2 - 2 {\color{Green}p} \cdot x_i + {\color{Green}p^2})}{n} \\ & = \frac{n}{n-1} \left ( \frac{\sum_{i=1}^n x_i^2}{n} - 2 {\color{Green}p} \cdot \frac{\sum_{i=1}^n x_i}{n} + {\color{Green}p^2} \cdot \frac{\sum_{i=1}^n 1}{n} \right ) \\ & = \frac{n}{n-1} \left ( {\color{Red}\frac{\sum_{i=1}^n x_i^2}{n}} - 2 {\color{Green}p} \cdot {\color{Green}\frac{\sum_{i=1}^n x_i}{n}} + {\color{Green}p^2} \cdot \frac{{\color{Orange}\sum_{i=1}^n 1}}{n} \right ) \\ & = \frac{n}{n-1} \left ( {\color{Green}p} - 2 {\color{Green}p} \cdot {\color{Green}p} + {\color{Green}p^2} \cdot \frac{\color{Orange}n}{n} \right ) \\ & = \frac{n}{n-1} \left ( p - 2p^2 + p^2 \right ) \\ & = \frac{n}{n-1} \left ( p - p^2 \right ) \\ & = \frac{n}{n-1} \cdot p(1-p) \end{align} }$

Here, if $\displaystyle{ n }$ is large enough, $\displaystyle{ \frac{n}{n-1} }$ is close to 1 and can be dropped from the calculation.

$\displaystyle{ s^2 \approx p(1-p) }$

Then the standard deviation is also obtained:

$\displaystyle{ \begin{align} s & = \sqrt{s^2} \\ & = \sqrt{\frac{n}{n-1} \cdot p(1-p)} \\ & \approx \sqrt{p(1-p)} \end{align} }$
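Likewise, the sample formulas can be checked numerically against the definition. A minimal sketch in Python, with hypothetical 0/1 data:

```python
# Numeric check that the sample variance of binary data equals
# n/(n-1) * p(1-p)  (hypothetical 0/1 observations, illustrative only)
x = [1, 0, 1, 1, 0, 1, 0, 1]    # sample of n = 8 binary values
n = len(x)
p = sum(x) / n                   # sample proportion

# Sample variance by its definition (denominator n - 1)
s2_def = sum((xi - p) ** 2 for xi in x) / (n - 1)

# Variance from the derived formula, and its large-n approximation
s2_formula = n / (n - 1) * p * (1 - p)
s2_approx = p * (1 - p)

assert abs(s2_def - s2_formula) < 1e-12
print(s2_def, s2_approx)  # the approximation differs by the factor n/(n-1)
```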

#### Sum of squares

In ANOVA, the total sum of squares is compared with the sums of squares within and between groups.

The sum of squares is the numerator of the variance.

Variance $\displaystyle{ = \frac{\sum_{i=1}^n (x_i - \bar x)^2}{n-1} }$
Sum of squares $\displaystyle{ = {\sum_{i=1}^n (x_i - \bar x)^2} }$

Now consider a case where the observations are divided into $\displaystyle{ k }$ groups.

$\displaystyle{ \begin{Bmatrix} x_1, x_2, \cdots, x_n \end{Bmatrix} \xrightarrow{\text{divide into } k \text{ groups}} \begin{Bmatrix} x_1, x_2, \cdots\cdots, x_a & (\text{sample size} = a) \\ x_{a+1}, x_{a+2}, \cdots, x_b & (\text{sample size} = b) \\ \vdots & \vdots \\ \cdots\cdots x_{n-1}, x_n & (\text{sample size} = z) \end{Bmatrix} }$

The sum of squares of the total observations can be decomposed as follows:

$\displaystyle{ \begin{align} {\sum_{i=1}^n (x_i - \bar x)^2} & = \sum_{j=1}^k \sum_{i \in \text{group } j} (x_i - \bar x_j)^2 + \sum_{j=1}^k n_j (\bar x_j - \bar x)^2 \end{align} }$

where $\displaystyle{ \bar x_j }$ is the mean and $\displaystyle{ n_j }$ the size of the $\displaystyle{ j }$-th group: the total sum of squares splits into the within-group sum of squares and the between-group sum of squares.
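The comparison of sums of squares that ANOVA relies on rests on the identity that the total sum of squares equals the within-group plus the between-group sums of squares. A minimal numeric check in Python, with hypothetical groups:

```python
# Numeric check of the ANOVA identity: total sum of squares equals
# within-group SS plus between-group SS (hypothetical groups, illustrative)
groups = [[3.0, 5.0, 4.0], [8.0, 6.0, 7.0, 7.0], [2.0, 4.0]]

all_x = [v for g in groups for v in g]
grand_mean = sum(all_x) / len(all_x)

# Total SS: squared deviations of every value from the grand mean
ss_total = sum((v - grand_mean) ** 2 for v in all_x)
# Within SS: squared deviations from each group's own mean
ss_within = sum(
    sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups
)
# Between SS: squared deviations of group means from the grand mean,
# weighted by group size
ss_between = sum(
    len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
)

assert abs(ss_total - (ss_within + ss_between)) < 1e-9
print(ss_total, ss_within, ss_between)
```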

## Standard Error

If we repeated sampling from the population infinitely many times with a sample size of $\displaystyle{ n }$ each time ($\displaystyle{ n }$ large enough), then no matter what the population distribution is, those infinitely many sample means follow a normal distribution with mean identical to the population mean $\displaystyle{ \mu }$ and variance $\displaystyle{ \frac{\sigma^2}{n} }$ derived from the population variance (not the population variance $\displaystyle{ \sigma^2 }$ itself). This is the central limit theorem.

The proof that this distribution is normal requires more advanced mathematics such as the Maclaurin expansion, characteristic functions, or moment-generating functions; the variance $\displaystyle{ \frac{\sigma^2}{n} }$ itself follows from the rules of variance for independent observations: $\displaystyle{ \operatorname{Var}(\bar x) = \frac{1}{n^2} \sum_{i=1}^n \operatorname{Var}(x_i) = \frac{n \sigma^2}{n^2} = \frac{\sigma^2}{n} }$.

Hence the standard deviation of sample means is the square root of that variance: $\displaystyle{ \sqrt{\frac{\sigma^2}{n}} = \frac{\sigma}{\sqrt{n}} }$.

This standard deviation of sample means, $\displaystyle{ \frac{\sigma}{\sqrt{n}} }$, is defined as the standard error.

In reality, only God knows the population mean $\displaystyle{ \mu }$ and population standard deviation $\displaystyle{ \sigma }$; thus the only way to utilize the standard error is to assume that the sample standard deviation is close to the population standard deviation, as follows:

$\displaystyle{ Standard\ error \approx \frac{s}{\sqrt{n}} }$, where $\displaystyle{ s }$ = sample standard deviation

| | Standard error | Notes |
|---|---|---|
| Mean | $\displaystyle{ SEM = \frac{\sigma}{\sqrt{n}} \approx \frac{s}{\sqrt{n}} }$ | $\displaystyle{ \sigma }$ is the population standard deviation; $\displaystyle{ s }$ is the sample standard deviation; $\displaystyle{ n }$ is the sample size |
| Proportion | $\displaystyle{ SE_p = \frac{\sigma}{\sqrt{n}} = \sqrt{\frac{\pi (1-\pi)}{n}} \approx \frac{s}{\sqrt{n}} = \sqrt{\frac{p (1-p)}{n}} }$ | $\displaystyle{ \pi }$ is the population proportion; $\displaystyle{ p }$ is the sample proportion |
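The claim that sample means scatter with standard deviation $\displaystyle{ \frac{\sigma}{\sqrt{n}} }$ can be checked by simulation. A sketch in Python, drawing repeated samples from a uniform population (the sample size and trial count are arbitrary illustrative choices):

```python
import random
import statistics

# Simulation sketch: the standard deviation of sample means approaches
# sigma / sqrt(n), whatever the population distribution (here: uniform)
random.seed(0)

n = 50           # sample size
trials = 20000   # number of repeated samples

# Uniform(0, 1) has sigma = sqrt(1/12)
sigma = (1 / 12) ** 0.5
expected_se = sigma / n ** 0.5

means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]
observed_se = statistics.pstdev(means)

print(expected_se, observed_se)  # the two values should be close
```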

## Standard Error and Confidence Interval

### When sample size is large enough and assumed to follow normal distribution

According to the central limit theorem, the distribution of sample means follows a normal distribution with mean $\displaystyle{ \mu }$ (the population mean) and standard deviation $\displaystyle{ \frac{\sigma}{\sqrt{n}} }$.

The mean of one single sample will lie somewhere within the distribution of sample means around their mean = $\displaystyle{ \mu }$ (the population mean!) with standard deviation of $\displaystyle{ \frac{\sigma}{\sqrt{n}} }$.

As a simple rule, in a normal distribution, each range of ±$\displaystyle{ k }$ SD contains the following proportion of the total values.

| ±$\displaystyle{ k }$ SD | Proportion |
|---|---|
| ±1 SD | 68.2% |
| ±1.96 SD | 95% |
| ±2 SD | 95.4% |
| ±2.58 SD | 99% |
| ±3 SD | 99.7% |
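These proportions can be reproduced from the standard normal CDF, available in Python's standard library as `statistics.NormalDist`:

```python
from statistics import NormalDist

# Coverage of +/- k SD ranges under the standard normal distribution
nd = NormalDist()  # mean 0, standard deviation 1

for k in (1, 1.96, 2, 2.58, 3):
    coverage = nd.cdf(k) - nd.cdf(-k)
    print(f"+/-{k} SD: {coverage:.1%}")
```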

We cannot know how far a single sample mean $\displaystyle{ \bar{x} }$ is from the true mean of sample means = the population mean $\displaystyle{ \mu }$,

but we can estimate the probability that a certain range around a single sample mean contains the true mean of sample means = the population mean $\displaystyle{ \mu }$, according to the above table.

The standard deviation of sample means = the standard error is $\displaystyle{ \frac{\sigma}{\sqrt{n}} }$,

and it can be approximated by using the sample standard deviation $\displaystyle{ s }$ as $\displaystyle{ \frac{s}{\sqrt{n}} }$.

Thus, $\displaystyle{ \bar{x}\ \pm\ k \frac{s}{\sqrt{n}} }$ is a range around a single sample mean $\displaystyle{ \bar{x} }$, and the corresponding proportion is the probability that the range contains $\displaystyle{ \mu }$.

$\displaystyle{ \bar{x}\ \pm\ 1.96 \frac{s}{\sqrt{n}} }$ is the 95% confidence interval, and $\displaystyle{ \bar{x}\ \pm\ 2.58 \frac{s}{\sqrt{n}} }$ is the 99% confidence interval.
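Putting the pieces together, a 95% confidence interval for a mean can be computed as follows; a minimal sketch with hypothetical data (n = 30, large enough to use the normal coefficient 1.96):

```python
import statistics

# Hypothetical sample of n = 30 measurements (illustrative values only)
x = [5.1, 4.9, 5.3, 5.0, 4.8, 5.2, 5.1, 4.7, 5.4, 5.0,
     4.9, 5.2, 5.1, 5.0, 4.8, 5.3, 5.0, 4.9, 5.1, 5.2,
     5.0, 4.8, 5.2, 5.1, 4.9, 5.0, 5.3, 4.9, 5.1, 5.0]
n = len(x)
mean = statistics.fmean(x)
s = statistics.stdev(x)    # sample SD (n - 1 denominator)
se = s / n ** 0.5          # standard error of the mean

# 95% CI: mean +/- 1.96 * SE
lower, upper = mean - 1.96 * se, mean + 1.96 * se
print(f"95% CI: ({lower:.3f}, {upper:.3f})")
```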

### When sample size is small (roughly <30) and assumed to follow t distribution

We have to refer to the t-distribution table instead of the normal distribution table (Z table),

and take into account the degrees of freedom, $\displaystyle{ n-1 }$.

Find the relevant coefficient $\displaystyle{ k }$ of $\displaystyle{ \bar{x}\ \pm\ k \frac{s}{\sqrt{n}} }$ in the t-distribution table, using the desired CI level and the degrees of freedom.
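A sketch of the same computation for a small sample, with a few two-sided 95% critical values excerpted from a standard t-distribution table (the data values are hypothetical):

```python
import statistics

# Two-sided 95% critical values of t, excerpted from a standard
# t-distribution table, keyed by degrees of freedom
T_95 = {4: 2.776, 9: 2.262, 14: 2.145, 19: 2.093, 29: 2.045}

# Hypothetical small sample (n = 10, so df = 9)
x = [5.1, 4.8, 5.3, 5.0, 4.9, 5.2, 4.7, 5.1, 5.0, 4.9]
n = len(x)
df = n - 1

mean = statistics.fmean(x)
se = statistics.stdev(x) / n ** 0.5
k = T_95[df]    # 2.262 instead of the normal-distribution 1.96

lower, upper = mean - k * se, mean + k * se
print(f"95% CI (t, df={df}): ({lower:.3f}, {upper:.3f})")
```

Note that the t coefficient is always larger than 1.96 and approaches it as the degrees of freedom grow, which is why the normal coefficient suffices for large samples.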

## Probability, Likelihood

Statistics is an attempt to estimate a population through a sample.

A population always follows some kind of distribution, i.e., some kind of probability distribution. But the parameters of a population (e.g., mean and standard deviation in a normal distribution, success probability and number of trials in a binomial distribution, location and scale in a logistic distribution, etc.) are what only God knows.

And each distribution has a corresponding equation that describes its probability distribution, and that equation is determined by the parameters.

### Probability

When a sample is drawn from a population, the parameters of the sample can be calculated, and they are estimates of the parameters of the population.

But only God knows the true parameters of the population, and the parameters of the sample always contain random error.

The equation relevant to each distribution is built from the parameters of the sample, and the value that the equation produces also contains random error.

And that value the equation produces is the probability. Probability is the chance that a given observation occurs under the distribution. More specifically, it is a conditional probability: the parameters (= the condition) give the probability that the observed data exist.

### Likelihood

When a sample is drawn from a population, the observations in the sample should follow the distribution God knows, with the parameters God knows, which are impossible to know.

And there are multiple possible sets of parameters under which the observed data in the sample could arise, and different sets of parameters have different chances of producing them.

Those chances are the likelihood. One set of parameters may produce the observed data in the sample with a very low chance, another set with a relatively high chance, and yet another set with the highest chance.

Naturally, the set of parameters with the highest chance, i.e., the most likely set of parameters (the parameters with the maximum likelihood), is adopted and used to build the relevant equation. That is the maximum likelihood estimation method.

In contrast to the above-mentioned (conditional) probability, which fixes the parameters (hypothesis) and varies the observations (data), likelihood fixes the observations (data) and varies the parameters (hypothesis).
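A minimal sketch of maximum likelihood estimation for binary data in Python: the likelihood of hypothetical 0/1 observations is evaluated over a grid of candidate parameter values, and the candidate with the highest likelihood turns out to be the sample proportion:

```python
# Maximum likelihood sketch: for binary data, try candidate values of the
# success probability and keep the one with the highest likelihood
# (hypothetical 0/1 observations, illustrative only)
x = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]   # 7 successes in 10 trials

def likelihood(p, data):
    """Chance of observing this exact data if the parameter were p."""
    out = 1.0
    for xi in data:
        out *= p if xi == 1 else (1 - p)
    return out

# Evaluate the likelihood over a grid of candidate parameters
candidates = [i / 100 for i in range(1, 100)]
best_p = max(candidates, key=lambda p: likelihood(p, x))

print(best_p)  # 0.7, the sample proportion, maximizes the likelihood
```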