# [Math Review] Statistics Basics: Main Concepts in Hypothesis Testing

# Case Study

The **Physicians' Reactions** case study investigated **whether physicians spend less time with obese patients**. Physicians were sampled randomly and each was shown a chart of a patient complaining of a migraine headache. They were then asked to estimate how long they would spend with the patient. The charts were identical except that for half the charts, the patient was obese and for the other half, the patient was of average weight. The chart a particular physician viewed was determined randomly. Thirty-three physicians viewed charts of average-weight patients and 38 physicians viewed charts of obese patients.

# Null Hypothesis

The hypothesis that **an apparent effect is due to chance** is called the **null hypothesis**. Keep in mind that the null hypothesis is typically the opposite of the researcher's hypothesis. If the null hypothesis is rejected, then the alternative to the null hypothesis (called the **alternative hypothesis**) is accepted.

In the Physicians' Reactions study, the researchers hypothesized that physicians would expect to spend less time with obese patients. The null hypothesis that the two types of patients are treated identically is put forward with the hope that it can be discredited and therefore rejected. So the null hypothesis is

H_{0}: μ_{obese} = μ_{average}

# Probability Value

**The probability value is the probability of an outcome given the NULL hypothesis.** It is not the probability of the hypothesis given the outcome. If the probability of the outcome given the hypothesis is sufficiently low, we have evidence that the hypothesis is false. In other words, **a low probability value casts doubt on the null hypothesis**.

In the Physicians' Reactions study, we compute the probability of getting a difference **as large or larger than** the observed difference (31.4 - 24.7 = 6.7 minutes) if the difference were, in fact, due solely to chance. This probability can be computed to be 0.0057. Since this is such a low probability, we have confidence that the difference in times is due to the patient's weight and is not due to chance.
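The mechanics of such a calculation can be sketched with a normal approximation. The means (31.4 and 24.7 minutes) and sample sizes (33 and 38) come from the study as described above, but the standard deviations below are hypothetical placeholders, so the resulting p-value is illustrative rather than the study's actual 0.0057:

```python
from math import sqrt
from statistics import NormalDist

# Means and sample sizes are from the study; the standard deviations
# below are HYPOTHETICAL placeholders chosen for illustration only.
mean_avg, n_avg, sd_avg = 31.4, 33, 10.0        # average-weight charts
mean_obese, n_obese, sd_obese = 24.7, 38, 10.0  # obese charts

diff = mean_avg - mean_obese  # observed difference: 6.7 minutes
se = sqrt(sd_avg**2 / n_avg + sd_obese**2 / n_obese)  # SE of the difference
z = diff / se  # how many standard errors the difference is from zero

# One-tailed p-value: probability of a difference this large or larger
# if chance alone (the null hypothesis) were operating.
p_one_tailed = 1 - NormalDist().cdf(z)
print(round(diff, 1), round(p_one_tailed, 4))
```

A real analysis would use the t distribution with the sample standard deviations; the normal version is used here only to keep the sketch dependency-free.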

# Significance Testing

The probability value below which the null hypothesis is rejected is called the **α level or simply α**. It is also called the **significance level**. When the null hypothesis is rejected, the effect is said to be **statistically significant**. It is very important to keep in mind that statistical significance means only that the null hypothesis of exactly no effect is rejected; it does not mean that the effect is important. **Do not confuse statistical significance with practical significance**.

## Two approaches to significance testing

- A significance test is conducted and the probability value is reported as a measure of the strength of the evidence against the null hypothesis; higher probabilities provide less evidence that the null hypothesis is false. (Common in scientific research.)

| Probability | Meaning |
| --- | --- |
| p < 0.01 | The data provide strong evidence that the null hypothesis is false. |
| 0.01 < p < 0.05 | The null hypothesis is typically rejected, but not with as much confidence as it would be if the probability value were below 0.01. |
| 0.05 < p < 0.10 | The data provide weak evidence against the null hypothesis and are not considered low enough to justify rejecting it. |

- Specify an α level before analyzing the data. If the data analysis results in a probability value below the α level, then the null hypothesis is rejected; if it is not, then the null hypothesis is not rejected. If a result is significant, then it does not matter how significant it is.
If it is not significant, then it does not matter how close to being significant it is.
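The second approach reduces to a binary rule, which might be sketched as follows (`ALPHA` and `decide` are illustrative names, not part of any standard API):

```python
ALPHA = 0.05  # significance level, chosen BEFORE looking at the data

def decide(p_value: float, alpha: float = ALPHA) -> str:
    """Binary decision rule: reject H0 iff p < alpha; the margin is irrelevant."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.0057))  # far below alpha  -> "reject H0"
print(decide(0.049))   # barely below     -> still simply "reject H0"
print(decide(0.051))   # barely above     -> "fail to reject H0", however close
```

The last two calls illustrate the point in the text: under this approach, 0.049 and 0.051 lead to opposite decisions even though they are nearly identical as evidence.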

# Type I and II Errors

**Type I error** occurs when a significance test results in the rejection of a true null hypothesis. α is the probability of a Type I error given that the null hypothesis is true.

**Type II error** is failing to reject a false null hypothesis. If the null hypothesis is false, then the probability of a Type II error is called **β (beta)**. The probability of correctly rejecting a false null hypothesis equals 1 − β and is called **power**. A Type II error is not really an error in the same sense as a Type I error: when a statistical test is not significant, it means only that the data do not provide strong evidence that the null hypothesis is false. Lack of significance does not support the conclusion that the null hypothesis is true. One way to decrease β is to increase the sample size. With a fixed sample size, β increases as α decreases, so in practice we must make a trade-off between α and β.
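Both relationships (larger samples shrink β; at a fixed sample size, a smaller α inflates β) can be checked numerically for a one-sided z-test under a normal approximation. The effect size, standard deviation, and sample sizes below are arbitrary illustrative values:

```python
from statistics import NormalDist

def beta_for_alpha(alpha: float, effect: float, sd: float, n: int) -> float:
    """Type II error rate beta for a one-sided z-test of H0: mu <= 0,
    when the true mean is `effect` (normal approximation)."""
    nd = NormalDist()
    se = sd / n ** 0.5
    crit = nd.inv_cdf(1 - alpha) * se    # rejection threshold for the sample mean
    return nd.cdf((crit - effect) / se)  # P(fail to reject | true mean = effect)

# Smaller alpha at the same sample size -> larger beta (the trade-off in the text):
b_05 = beta_for_alpha(0.05, effect=5, sd=10, n=20)
b_01 = beta_for_alpha(0.01, effect=5, sd=10, n=20)
# Larger sample size at the same alpha -> smaller beta:
b_big_n = beta_for_alpha(0.05, effect=5, sd=10, n=80)
print(b_05 < b_01, b_big_n < b_05)  # → True True
```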

# One- and Two-Tailed Tests

Tests that compute one-tailed probabilities are called **one-tailed tests**; those that compute two-tailed probabilities are called **two-tailed tests**.

Whether a one-tailed or a two-tailed test is appropriate depends on the way the question is posed. If we are asking whether physicians spend a different amount of time with obese patients, then we would conclude they do if they spent either much more or much less time than expected by chance. So the null hypothesis for the two-tailed test is

H_{0}: μ_{obese} = μ_{average}

If our question is whether physicians spend less time with obese patients, we would use a one-tailed test and the null hypothesis is

H_{0}: μ_{obese} ≥ μ_{average}
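For a test statistic that is approximately standard normal under the null hypothesis, the two kinds of probability relate in a simple way; `z = 2.5` below is a hypothetical value, not one taken from the study:

```python
from statistics import NormalDist

z = 2.5  # hypothetical test statistic (observed difference / standard error)
nd = NormalDist()

p_one_tailed = 1 - nd.cdf(z)             # directional: H0 is mu_obese >= mu_average
p_two_tailed = 2 * (1 - nd.cdf(abs(z)))  # non-directional: H0 is mu_obese == mu_average

print(p_two_tailed == 2 * p_one_tailed)  # → True: two-tailed p doubles the one-tailed p
```

This is why a one-tailed test rejects more easily in the predicted direction: the same statistic yields half the p-value.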

# Significance Testing and Confidence Intervals

- The 95% confidence interval corresponds to the 0.05 significance level, and the 99% confidence interval corresponds to the 0.01 significance level.
- Whenever an effect is significant, all values in the confidence interval will be on the same side of zero. Therefore, a significant finding allows the researcher to specify the direction of the effect.
- If the 95% confidence interval contains zero (more precisely, the parameter value specified in the null hypothesis), then the effect will not be significant at the 0.05 level. That is why
**the null hypothesis should not be accepted when it is not rejected**.
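This correspondence can be illustrated numerically; `diff` and `se` below are hypothetical summary numbers for a difference of two means under a normal approximation:

```python
from statistics import NormalDist

# Hypothetical summary numbers for a difference of two means.
diff, se = 6.7, 2.3
z_crit = NormalDist().inv_cdf(0.975)  # ≈ 1.96 for a 95% interval

lo, hi = diff - z_crit * se, diff + z_crit * se
p_two_tailed = 2 * (1 - NormalDist().cdf(abs(diff / se)))

# The interval excludes zero exactly when the two-tailed p is below 0.05.
print((lo > 0 or hi < 0) == (p_two_tailed < 0.05))  # → True
```

Here the whole interval lies above zero, so the effect is significant at the 0.05 level and its direction is determined, matching the bullet points above.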