Statistical test

From Vaccipedia | Resources for Vaccines, Tropical medicine and Travel medicine


Comparing Proportions

The tests below are grouped by the number of proportions compared and by whether the samples are independent (unpaired in the case of two groups) or dependent (paired in the case of two groups).

2 proportions, independent samples, sufficiently large sample
  • Z test
[math]\displaystyle{ \begin{align} z & = \frac{p_1-p_2}{SE_{pooled(p_1-p_2)}} \\ & = \frac{p_1-p_2}{\sqrt{\frac{\bar{p}(1-\bar{p})}{n_1}+\frac{\bar{p}(1-\bar{p})}{n_2}}} \end{align} }[/math]
[math]\displaystyle{ \bar{p} }[/math] = pooled proportion of the two groups combined
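
A minimal sketch of this test in Python, using proportions_ztest from statsmodels (which forms the pooled-proportion standard error shown above); the counts and sample sizes are made-up example values.

<syntaxhighlight lang="python">
from statsmodels.stats.proportion import proportions_ztest

count = [45, 30]    # successes in group 1 and group 2 (hypothetical)
nobs = [100, 100]   # sample sizes n1 and n2 (hypothetical)

# by default the two proportions are pooled to estimate the standard error
z, p_value = proportions_ztest(count, nobs)
print(f"z = {z:.3f}, p = {p_value:.4f}")
</syntaxhighlight>
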
≥ 3 proportions, independent samples, sufficiently large sample
  • [math]\displaystyle{ \chi^2 }[/math] test
[math]\displaystyle{ \chi^2 = \sum \frac{(O - E)^2}{E} }[/math]
[math]\displaystyle{ O }[/math] = observed values
[math]\displaystyle{ E }[/math] = expected values
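
A minimal sketch in Python using scipy.stats.chi2_contingency, which derives the expected values E and the statistic above from a contingency table; the counts are made-up example values.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2_contingency

# rows = groups, columns = outcome categories (hypothetical counts)
table = np.array([[30, 70],
                  [45, 55],
                  [50, 50]])

chi2, p_value, dof, expected = chi2_contingency(table)  # expected = E under H0
print(chi2, p_value, dof)
</syntaxhighlight>
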
2 proportions, dependent (paired) samples
  • McNemar's [math]\displaystyle{ \chi^2 }[/math] test
[math]\displaystyle{ \begin{align} & McNemar's\ \chi^2 \\ & = \frac{(n_1-n_2)^2}{n_1+n_2} \end{align} }[/math]
[math]\displaystyle{ n_1, n_2 }[/math] = numbers of discordant pairs of each type
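
A minimal sketch in Python using mcnemar from statsmodels; with exact=False and correction=False it reproduces the uncorrected statistic above. The paired 2×2 table holds made-up counts.

<syntaxhighlight lang="python">
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired outcomes; the off-diagonal cells are the discordant pairs (n1, n2)
table = [[60, 12],
         [5, 23]]

# exact=False, correction=False gives (n1 - n2)^2 / (n1 + n2), as in the formula above
result = mcnemar(table, exact=False, correction=False)
print(result.statistic, result.pvalue)
</syntaxhighlight>
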
Testing linear association across ordered categories, independent samples
  • [math]\displaystyle{ \chi^2 }[/math] trend test
[math]\displaystyle{ \begin{align} & \chi^2_{trend} \\ & = \frac{(\bar{x_1}-\bar{x_2})^2}{s^2(\frac{1}{n_1}+\frac{1}{n_2})} \\ & s = \sqrt{\sum \frac{(x_i-\bar{x})^2}{n-1}} \end{align} }[/math]
[math]\displaystyle{ x_i }[/math] = weighted values (scores of the ordered categories)
[math]\displaystyle{ n_i }[/math] = number of observations in group [math]\displaystyle{ i }[/math]
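
A minimal sketch in Python that computes the mean-score form of the trend statistic as written above, assuming s is the standard deviation of the scores of both groups combined; the scores are made-up example values.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

# score (weighted value) of the ordered exposure category for each observation,
# split by outcome group (hypothetical values)
x1 = np.array([0, 1, 1, 2, 2, 2, 3, 3])
x2 = np.array([0, 0, 0, 1, 1, 2, 2, 3])

n1, n2 = len(x1), len(x2)
s2 = np.concatenate([x1, x2]).var(ddof=1)        # s^2 with denominator n - 1

chi2_trend = (x1.mean() - x2.mean()) ** 2 / (s2 * (1 / n1 + 1 / n2))
p_value = chi2.sf(chi2_trend, df=1)              # the trend test has 1 degree of freedom
print(chi2_trend, p_value)
</syntaxhighlight>
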
≥ 1 cell with expected value < 5, independent samples
  • Fisher's exact test
  • very rarely needed in real research
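
A minimal sketch in Python using scipy.stats.fisher_exact on a 2×2 table with a small expected cell count; the counts are made-up example values.

<syntaxhighlight lang="python">
from scipy.stats import fisher_exact

# 2x2 contingency table with a small expected cell count (hypothetical counts)
table = [[2, 8],
         [9, 3]]

odds_ratio, p_value = fisher_exact(table)
print(odds_ratio, p_value)
</syntaxhighlight>
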

Comparing Means

The tests below are grouped by whether the data are parametric (i.e., normally distributed) or non-parametric (i.e., not normally distributed), and by whether the samples are independent (unpaired in the case of two groups) or dependent (paired in the case of two groups).

2 means, parametric, independent samples, sufficiently large sample
  • Z test
[math]\displaystyle{ \begin{align} z & = \frac{\bar{x_1}-\bar{x_2}}{SE_{(\bar{x_1}-\bar{x_2})}} \\ & = \frac{\bar{x_1}-\bar{x_2}}{\sqrt{\frac{s_1^2}{n_1}+\frac{s_2^2}{n_2}}} \end{align} }[/math]
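
A minimal sketch in Python that computes the two-sample Z statistic exactly as written above (unpooled standard error) and a two-sided p-value from the normal distribution; the data are simulated for illustration.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x1 = rng.normal(loc=10.0, scale=2.0, size=200)   # simulated large sample 1
x2 = rng.normal(loc=10.5, scale=2.0, size=220)   # simulated large sample 2

se = np.sqrt(x1.var(ddof=1) / len(x1) + x2.var(ddof=1) / len(x2))
z = (x1.mean() - x2.mean()) / se
p_value = 2 * norm.sf(abs(z))                    # two-sided p-value
print(z, p_value)
</syntaxhighlight>
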
2 means, parametric, dependent (paired) samples
  • Paired Student's t test
[math]\displaystyle{ \begin{align} paired\ t & = \frac{\bar{d}}{SE_d} \\ & = \frac{\bar{d}}{\frac{s}{\sqrt{n}}} \\ \end{align} }[/math]
where [math]\displaystyle{ \bar{d} }[/math] is the mean of the differences of the paired observations
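
A minimal sketch in Python using scipy.stats.ttest_rel, which is equivalent to a one-sample t test on the paired differences d; the measurements are made-up example values.

<syntaxhighlight lang="python">
from scipy.stats import ttest_rel

# paired measurements on the same subjects, e.g., before and after treatment (hypothetical)
before = [140, 135, 150, 144, 160, 152, 138, 147]
after  = [132, 130, 148, 141, 154, 150, 136, 143]

t, p_value = ttest_rel(before, after)   # works on the differences d, as in the formula above
print(t, p_value)
</syntaxhighlight>
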
2 means, non-parametric, independent samples
  • Wilcoxon rank sum test (= Mann-Whitney test)
[math]\displaystyle{ H_0 }[/math]: the distributions of outcomes in the two populations are the same
  1. Rank the combined observations of the two groups
  2. Separate the ranks back into the two groups
  3. Look up the critical range for the sum of ranks in the group with the smaller number of observations
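
A minimal sketch in Python using scipy.stats.mannwhitneyu (the Mann-Whitney form of the rank sum test); the observations are made-up example values.

<syntaxhighlight lang="python">
from scipy.stats import mannwhitneyu

# two independent, non-normally distributed samples (hypothetical values)
group1 = [3.1, 4.7, 2.8, 9.4, 5.0, 3.3]
group2 = [6.2, 8.1, 7.4, 12.5, 9.9]

u, p_value = mannwhitneyu(group1, group2, alternative='two-sided')
print(u, p_value)
</syntaxhighlight>
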
2 means, non-parametric, dependent (paired) samples
  • Wilcoxon signed rank test
Here 'signed' means 'taking the signs of the differences of the paired data into account'
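
A minimal sketch in Python using scipy.stats.wilcoxon on paired data; the measurements are made-up example values.

<syntaxhighlight lang="python">
from scipy.stats import wilcoxon

# paired, non-normally distributed measurements (hypothetical values)
before = [12.0, 15.5, 9.8, 20.1, 13.4, 17.2, 11.0]
after  = [10.5, 14.0, 10.2, 16.8, 12.9, 15.0, 10.1]

# the signed ranks of the paired differences are used, as described above
w, p_value = wilcoxon(before, after)
print(w, p_value)
</syntaxhighlight>
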
2 means, parametric, independent samples, small sample size (< 30 in a group)
  • Student's t test
[math]\displaystyle{ \begin{align} t & = \frac{\bar{x_1}-\bar{x_2}}{SE_{(\bar{x_1}-\bar{x_2})}} \\ & = \frac{\bar{x_1}-\bar{x_2}}{\sqrt{\frac{(n_1-1)s_1^2+(n_2-1)s_2^2}{(n_1-1)+(n_2-1)}}\sqrt{\frac{1}{n_1}+\frac{1}{n_2}}} \end{align} }[/math]
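
A minimal sketch in Python using scipy.stats.ttest_ind with its default equal_var=True, which uses the pooled-variance formula above; the samples are made-up example values.

<syntaxhighlight lang="python">
from scipy.stats import ttest_ind

# two small independent samples (< 30 observations per group, hypothetical values)
group1 = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]
group2 = [4.2, 4.6, 4.0, 4.9, 4.4]

t, p_value = ttest_ind(group1, group2)   # default equal_var=True uses the pooled variance
print(t, p_value)
</syntaxhighlight>
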
2 means, parametric, independent samples, large discrepancy in SDs between the groups
  • Bootstrap
  • Non-parametric tests
  • Fisher-Behrens test
  • Welch's t test (sketched below)
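
A minimal sketch in Python of the Welch option, using scipy.stats.ttest_ind with equal_var=False so that equal SDs are not assumed; the samples are made-up example values.

<syntaxhighlight lang="python">
from scipy.stats import ttest_ind

group1 = [5.1, 4.8, 6.0, 5.5, 4.9, 5.7]
group2 = [4.2, 9.6, 1.0, 7.9, 3.4, 12.2]   # much larger spread than group1 (hypothetical)

# equal_var=False gives Welch's t test, which does not assume equal SDs
t, p_value = ttest_ind(group1, group2, equal_var=False)
print(t, p_value)
</syntaxhighlight>
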
≥ 3 means, parametric, independent samples
  • One-way ANOVA (sketched below)
  • Linear-regression model

≥ 3 means, parametric, dependent samples
  • Repeated measures ANOVA
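
A minimal sketch in Python of one-way ANOVA using scipy.stats.f_oneway; the groups are made-up example values. For repeated measures ANOVA, statsmodels offers the AnovaRM class.

<syntaxhighlight lang="python">
from scipy.stats import f_oneway

# three independent, roughly normally distributed groups (hypothetical values)
group1 = [5.1, 4.8, 6.0, 5.5, 4.9]
group2 = [6.2, 6.8, 7.1, 6.5, 7.0]
group3 = [4.0, 4.4, 3.9, 4.6, 4.2]

f, p_value = f_oneway(group1, group2, group3)
print(f, p_value)
</syntaxhighlight>
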
≥ 3 means, non-parametric, independent samples
  • Kruskal-Wallis test
[math]\displaystyle{ H_0 }[/math]: the distributions of outcomes in all populations are the same
  1. Rank the combined observations of all groups
  2. Separate the ranks back into the original groups
  3. Sum the ranks in each group
[math]\displaystyle{ H = \frac{n-1}{n} \sum_{i=1}^k \frac{n_i(\bar{R_i}-E_R)^2}{s^2} }[/math]
[math]\displaystyle{ H }[/math] is the Kruskal-Wallis statistic
[math]\displaystyle{ n_i }[/math] is the number of observations in group [math]\displaystyle{ i }[/math]
[math]\displaystyle{ \bar{R_i} }[/math] is the mean of the ranks in group [math]\displaystyle{ i }[/math]
[math]\displaystyle{ E_R }[/math] is the expected value of the ranks, [math]\displaystyle{ (n+1)/2 }[/math]
[math]\displaystyle{ s^2 }[/math] is the variance of all the ranks
Compare [math]\displaystyle{ H }[/math] with the relevant critical value; for larger samples [math]\displaystyle{ H }[/math] approximately follows a [math]\displaystyle{ \chi^2 }[/math] distribution with [math]\displaystyle{ k-1 }[/math] degrees of freedom
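
A minimal sketch in Python using scipy.stats.kruskal; the groups are made-up example values. The returned H is compared with a chi-square distribution with k − 1 degrees of freedom.

<syntaxhighlight lang="python">
from scipy.stats import kruskal

# three independent, non-normally distributed groups (hypothetical values)
group1 = [3.1, 4.7, 2.8, 9.4, 5.0]
group2 = [6.2, 8.1, 7.4, 12.5, 9.9]
group3 = [1.9, 2.5, 3.0, 2.2, 4.1]

h, p_value = kruskal(group1, group2, group3)
print(h, p_value)
</syntaxhighlight>
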

≥ 3 means, non-parametric, dependent samples
  • Consider transforming the data so that a parametric test can be used (e.g., a logarithmic transformation), or other approaches