\boxed{ \text{Estimators: } \begin{cases} \bar{X} &= \hat{\mu} \\ s &= \hat{\sigma} \end{cases} } \\ \tiny \textit{sample mean estimates $\mu$; sample sd estimates $\sigma$}
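A minimal numeric sketch of these two estimators (the sample values below are invented, and numpy is assumed to be available):

```python
# Sketch: point estimates of mu and sigma from a sample (made-up data).
import numpy as np

x = np.array([4.1, 5.3, 4.7, 5.0, 4.4, 5.6])

x_bar = x.mean()       # sample mean, estimates mu
s = x.std(ddof=1)      # sample sd (n - 1 denominator), estimates sigma

print(f"mu-hat    = {x_bar:.3f}")
print(f"sigma-hat = {s:.3f}")
```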
Vocab
Estimation: Using a sample statistic to predict the value of unknown population parameter(s).
- Estimator: The sample statistic used for the estimation.
- Estimate: The value of the estimation.
Point Estimate: Estimate of a population parameter that is a single numerical value.
Interval Estimate: Interval around the point estimate likely to contain the corresponding population parameter.
\boxed{ \text{Standard Error: } SE_{\bar{X}} = \frac{s}{\sqrt{n}} }
On Standard Error (SE) v. Standard Deviation (SD):
- SE measures uncertainty in a sample statistic.
- SD measures dispersion of data.
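A small sketch contrasting the two on invented data; SE is just SD scaled down by sqrt(n), so it shrinks as n grows while SD does not:

```python
# Sketch: standard error of the mean vs. standard deviation of the data.
import numpy as np

x = np.array([12.1, 9.8, 11.4, 10.7, 10.2, 11.9, 9.5, 10.8])
n = len(x)

sd = x.std(ddof=1)       # dispersion of the individual observations
se = sd / np.sqrt(n)     # uncertainty in x-bar as an estimate of mu

print(f"SD = {sd:.3f}")
print(f"SE = {se:.3f}   # smaller, and shrinks further as n grows")
```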
On n (sample size):
\boxed{ n = \left( \frac{ z_{\alpha / 2} \, \sigma }{ E } \right)^2 } \\ \small\textit{E = desired margin of error}
- Raising n reduces size of error margin.
- Increasing n can be costly or unethical.
- When solving for n to hit a target margin of error, round n up.
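A sketch of the sample-size calculation; sigma and E below are assumed values chosen only for illustration, and scipy supplies z_{\alpha/2}:

```python
# Sketch: n = (z_{alpha/2} * sigma / E)^2, rounded up.
import math
from scipy.stats import norm

sigma = 15.0    # assumed (or pilot-study) population sd
E = 2.0         # desired margin of error
alpha = 0.05

z = norm.ppf(1 - alpha / 2)          # z_{alpha/2}
n = (z * sigma / E) ** 2

print(math.ceil(n))                  # always round n up
```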
\boxed{
\text{C.I. of $\mu$: } \bar{X} \pm t_{\alpha / 2, n - 1} \frac{s}{\sqrt{n}}
} \\
\small\textit{when $\sigma$ is unknown}
Why?
\bar{X} \pm t \, SE_{\bar{X}} = \bar{X} \pm t_{\alpha / 2, n - 1} \frac{s}{\sqrt{n}}
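A sketch of the interval, computed by hand from the boxed formula and then via scipy's t.interval helper (data values are invented):

```python
# Sketch: 95% t confidence interval for mu.
import numpy as np
from scipy import stats

x = np.array([23.1, 26.4, 24.8, 25.0, 23.9, 26.7, 25.5, 24.2])
n = len(x)
alpha = 0.05

x_bar = x.mean()
se = x.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(1 - alpha / 2, n - 1)     # t_{alpha/2, n-1}

print((x_bar - t_crit * se, x_bar + t_crit * se))

# Same interval via scipy:
print(stats.t.interval(1 - alpha, n - 1, loc=x_bar, scale=se))
```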
More on t-distribution:
- Heavier tails than z.
- Approaches normal (z) as n grows.
- Uses df (degrees of freedom); here df = n - 1.
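A quick check of the first two points: t critical values exceed the z critical value (heavier tails) but shrink toward it as df grows:

```python
# Sketch: t_{0.025, df} approaches z_{0.025} as df increases.
from scipy.stats import norm, t

alpha = 0.05
print(f"z          = {norm.ppf(1 - alpha / 2):.3f}")
for df in (2, 5, 30, 1000):
    print(f"t (df={df:4d}) = {t.ppf(1 - alpha / 2, df):.3f}")
```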
On Empirical Rule v. Confidence Interval:
- CI: “We’re X% confident that \mu is between x and y.”
- Empirical Rule: “X% of the data is between x and y.”
- The empirical-rule interval is wider than the CI (it scales with SD, not SE).
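A sketch comparing the two intervals on the same invented data; the empirical-rule interval scales with SD, the CI with SE = SD / sqrt(n):

```python
# Sketch: "covers ~95% of the data" vs. "covers mu with 95% confidence".
import numpy as np
from scipy import stats

x = np.array([50.2, 47.9, 52.3, 49.1, 51.0, 48.4, 50.8, 49.7, 51.6, 48.9])
n = len(x)
x_bar, s = x.mean(), x.std(ddof=1)

# Empirical rule: roughly 95% of the data within 2 SDs of the mean.
print("data interval:", (x_bar - 2 * s, x_bar + 2 * s))

# 95% CI for mu: uses SE, so it is much narrower.
t_crit = stats.t.ppf(0.975, n - 1)
se = s / np.sqrt(n)
print("CI for mu:   ", (x_bar - t_crit * se, x_bar + t_crit * se))
```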
Null Hypothesis (H_0): Presumed to be true initially.
Alternative Hypothesis (H_1): \bar{H_0}, what we hope to prove.
Note: Write conclusions in terms of H_0
- e.g., “accept H_0” or “reject H_0”
On Types of Errors:
\begin{array}{c|cc} \text{Verdict} & H_0 \text{ is true} & H_1 \text{ is true} \\ \hline \text{Accept } H_0 & \text{Okay} & \text{Type II Error} \\ \text{Reject } H_0 & \text{Type I Error} & \text{Okay} \end{array}
\text{What $\alpha$ and $\beta$ mean: } \begin{cases} \alpha &= P(\text{committing Type I error}) \\ \beta &= P(\text{committing Type II error}) \end{cases}
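A Monte Carlo sketch of what \alpha means: when H_0 is actually true, a level-\alpha t-test should reject about \alpha of the time (all simulation settings below are arbitrary):

```python
# Sketch: estimated Type I error rate of the one-sample t-test under H_0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, c, n, reps = 0.05, 10.0, 20, 10_000

rejections = 0
for _ in range(reps):
    x = rng.normal(loc=c, scale=3.0, size=n)     # H_0 is true: mu = c
    _, p = stats.ttest_1samp(x, popmean=c)
    rejections += p < alpha

print(rejections / reps)    # should land close to alpha = 0.05
```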
\boxed{ \text{Test Statistic: } T = \frac{ \bar{X} - c }{ s / \sqrt{n} } \sim t(n-1) } \\ \small\textit{under $H_0 : \mu = c$}
\boxed{ \text{Reject $H_0$ if: } \begin{cases} H_1 : \mu \ne c &\quad |T| > t_{\alpha / 2 , n - 1} &\quad \text{(two-tailed)} \\ H_1 : \mu > c &\quad T > t_{\alpha , n - 1} &\quad \text{(right-tailed)} \\ H_1 : \mu < c &\quad T < - t_{\alpha , n - 1} &\quad \text{(left-tailed)} \end{cases} }
On Telling Tails:
- Check H_1 against the cases above (or draw the curve).
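A worked sketch of the two-tailed case: compute T by hand and compare |T| to t_{\alpha/2, n-1} (data and c are invented):

```python
# Sketch: one-sample t-test of H_0: mu = c vs. H_1: mu != c, by hand.
import numpy as np
from scipy import stats

x = np.array([10.4, 9.8, 11.2, 10.9, 10.1, 11.5, 10.7, 9.9])
c, alpha = 10.0, 0.05
n = len(x)

T = (x.mean() - c) / (x.std(ddof=1) / np.sqrt(n))
t_crit = stats.t.ppf(1 - alpha / 2, n - 1)       # t_{alpha/2, n-1}

print(f"T = {T:.3f}, critical value = {t_crit:.3f}")
print("reject H_0" if abs(T) > t_crit else "accept H_0")
```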
On Two-Tail Tests and Confidence Intervals:
- CI and two-tailed tests use the same quantities.
- If H_0’s \mu \in CI, then accept H_0.
- (as long as the CI and the two-tailed test use the same \alpha)
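A quick check of that equivalence on the same invented data: the "c inside the CI" check and the "|T| small" check always return the same verdict when they share \alpha:

```python
# Sketch: CI containment and the two-tailed rejection rule agree.
import numpy as np
from scipy import stats

x = np.array([10.4, 9.8, 11.2, 10.9, 10.1, 11.5, 10.7, 9.9])
c, alpha = 10.0, 0.05
n = len(x)

x_bar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(1 - alpha / 2, n - 1)

accept_by_ci = (x_bar - t_crit * se) <= c <= (x_bar + t_crit * se)
accept_by_test = abs((x_bar - c) / se) <= t_crit
print(accept_by_ci, accept_by_test)    # always identical
```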
p-value: Measure of evidence against H_0; smaller p-values mean stronger evidence.
\boxed{ \begin{aligned} \text{p-value: }& \begin{cases} H_1 : \mu \ne c &\quad \text{p-value} = 2 P(t_{n-1} > |T|) \\ H_1 : \mu > c &\quad \text{p-value} = P(t_{n-1} > T) \\ H_1 : \mu < c &\quad \text{p-value} = P(t_{n-1} < T) \end{cases} \\~\\ &\qquad \text{if p-value < $\alpha$, reject $H_0$} \end{aligned} }
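A sketch computing all three p-values from the same T, then letting scipy's built-in one-sample t-test confirm the two-tailed one (same invented data as above):

```python
# Sketch: p-values for the two-, right-, and left-tailed alternatives.
import numpy as np
from scipy import stats

x = np.array([10.4, 9.8, 11.2, 10.9, 10.1, 11.5, 10.7, 9.9])
c = 10.0
n = len(x)
df = n - 1

T = (x.mean() - c) / (x.std(ddof=1) / np.sqrt(n))

print("two-tailed  :", 2 * stats.t.sf(abs(T), df))   # H_1: mu != c
print("right-tailed:", stats.t.sf(T, df))             # H_1: mu >  c
print("left-tailed :", stats.t.cdf(T, df))            # H_1: mu <  c

# scipy computes the two-tailed p-value directly:
print(stats.ttest_1samp(x, popmean=c).pvalue)
```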