Week 5

This week, we will review functions of random variables and introduce sampling distributions and the Central Limit Theorem.
Published

September 19, 2022

Learning Outcomes

Monday

  • Central Limit Theorem

  • Normal Approximation of Binomial Distribution

  • Other Sampling Distributions

Wednesday

  • Bias and Mean Square Error

  • Unbiased Point Estimator

Reading

Day                  Reading
Monday's Lecture     MMS: 6.2-6.4
Wednesday's Lecture  MMS: 7.1

Homework

Homework 3 can be found here: https://m453.inqs.info/hws/hw3.html

It is due 9/23/2022 at 11:59PM.

Important Concepts

Central Limit Theorem

Let \(X_1, X_2, \ldots, X_n\) be independent and identically distributed random variables with \(E(X_i)=\mu\) and \(Var(X_i) = \sigma^2\). We define

\[ Y_n = \sqrt{n} \left(\frac{\bar X-\mu}{\sigma}\right), \quad \text{where}\ \bar X = \frac{1}{n}\sum^n_{i=1}X_i. \]

Then the distribution of \(Y_n\) converges to the standard normal distribution as \(n\rightarrow \infty\).
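A quick way to see the theorem in action is a small simulation. The sketch below (using NumPy; the Exponential(1) population and the sample sizes are illustrative choices, not from the lecture) draws many samples from a skewed distribution and checks that the standardized sample mean \(Y_n\) behaves like a \(N(0,1)\) variable.

```python
# Illustrative CLT simulation: Exponential(1) has mean 1 and sd 1, so the
# standardized sample mean Y_n should be approximately N(0, 1) for large n.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 10_000
mu, sigma = 1.0, 1.0            # Exponential(1): E(X) = 1, Var(X) = 1

samples = rng.exponential(scale=1.0, size=(reps, n))
xbar = samples.mean(axis=1)                 # one sample mean per replication
y_n = np.sqrt(n) * (xbar - mu) / sigma      # standardized sample means

print(y_n.mean())   # approximately 0
print(y_n.std())    # approximately 1
```

Even though the population is heavily skewed, the histogram of `y_n` would already look bell-shaped at this sample size.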

Normal Approximation of Binomial Distribution

Suppose \(X\sim Bin(n,p)\) and let \(\bar X = X/n\). If \(n\) is large enough, then \(\bar X \overset{\circ}{\sim}N\left\{p,p(1-p)/n\right\}\).
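This approximation can be checked numerically. The sketch below (an assumed example; the values of \(n\) and \(p\) are arbitrary) simulates the sample proportion \(\bar X = X/n\) and compares its mean and variance with \(p\) and \(p(1-p)/n\).

```python
# Illustrative check of the normal approximation to Bin(n, p)/n.
import numpy as np

rng = np.random.default_rng(1)
n, p, reps = 1_000, 0.3, 50_000

phat = rng.binomial(n, p, size=reps) / n    # \bar X = X / n

print(phat.mean())   # approximately p = 0.3
print(phat.var())    # approximately p(1-p)/n = 0.00021
```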

Other Sampling Distributions

\(\chi^2\)-distribution

Let \(Z_1, Z_2,\ldots,Z_n \overset{iid}{\sim}N(0,1)\); then

\[ \sum_{i=1}^nZ_i^2\sim\chi^2_n. \]

Let \(X_1, X_2,\ldots,X_n \overset{iid}{\sim}N(\mu,\sigma^2)\) and \(S^2 = \frac{1}{n-1}\sum^n_{i=1}(X_i-\bar X)^2\). Then \(\bar X \perp S^2\) and

\[ \frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}. \]
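Both chi-square facts can be verified by simulation. The sketch below (parameter values are illustrative) uses the property that a \(\chi^2_k\) variable has mean \(k\): the sum of \(n\) squared standard normals should average about \(n\), and \((n-1)S^2/\sigma^2\) should average about \(n-1\).

```python
# Illustrative check of the two chi-square results above.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 10, 20_000

# Sum of n squared N(0,1) draws ~ chi^2_n, which has mean n.
z = rng.standard_normal((reps, n))
chi_n = (z**2).sum(axis=1)
print(chi_n.mean())        # approximately n = 10

# (n-1)S^2/sigma^2 for a normal sample ~ chi^2_{n-1}, which has mean n-1.
mu, sigma = 5.0, 2.0
x = rng.normal(mu, sigma, size=(reps, n))
s2 = x.var(axis=1, ddof=1)                  # sample variance S^2
chi_nm1 = (n - 1) * s2 / sigma**2
print(chi_nm1.mean())      # approximately n - 1 = 9
```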

t-distribution

Let \(Z\sim N(0,1)\) and \(W\sim \chi^2_\nu\) with \(Z\perp W\); then

\[ T=\frac{Z}{\sqrt{W/\nu}} \sim t_\nu \]
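The definition can be simulated directly. The sketch below (with an arbitrary choice of \(\nu\)) builds \(T = Z/\sqrt{W/\nu}\) from independent draws and compares its variance with the known \(t_\nu\) variance \(\nu/(\nu-2)\), which holds for \(\nu > 2\).

```python
# Illustrative construction of a t_nu variable from its definition.
import numpy as np

rng = np.random.default_rng(3)
nu, reps = 10, 100_000

z = rng.standard_normal(reps)        # Z ~ N(0, 1)
w = rng.chisquare(nu, size=reps)     # W ~ chi^2_nu, independent of Z
t = z / np.sqrt(w / nu)              # T ~ t_nu

print(t.mean())   # approximately 0
print(t.var())    # approximately nu/(nu - 2) = 1.25
```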

F-distribution

Let \(W_1\sim\chi^2_{\nu_1}\), \(W_2\sim\chi^2_{\nu_2}\), and \(W_1\perp W_2\); then

\[ F = \frac{W_1/\nu_1}{W_2/\nu_2}\sim F_{\nu_1,\nu_2} \]
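As with the t-distribution, the definition can be simulated. The sketch below (degrees of freedom chosen arbitrarily) forms \(F\) from independent chi-square draws and compares its mean with the \(F_{\nu_1,\nu_2}\) mean \(\nu_2/(\nu_2-2)\), which holds for \(\nu_2 > 2\).

```python
# Illustrative construction of an F variable from its definition.
import numpy as np

rng = np.random.default_rng(4)
nu1, nu2, reps = 5, 12, 100_000

w1 = rng.chisquare(nu1, size=reps)   # W1 ~ chi^2_{nu1}
w2 = rng.chisquare(nu2, size=reps)   # W2 ~ chi^2_{nu2}, independent of W1
f = (w1 / nu1) / (w2 / nu2)          # F ~ F_{nu1, nu2}

print(f.mean())   # approximately nu2/(nu2 - 2) = 1.2
```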

Unbiased Estimator

Let \(\hat \theta\) be an estimator for a parameter \(\theta\). Then \(\hat \theta\) is an unbiased estimator if \(E(\hat \theta) = \theta\). Otherwise, \(\hat\theta\) is considered biased.
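A standard first example is the sample mean as an estimator of \(\mu\). The sketch below (with illustrative values of \(\mu\), \(n\), and the number of replications) averages many simulated estimates to check that \(E(\bar X) = \mu\).

```python
# Illustrative check that the sample mean is unbiased for mu.
import numpy as np

rng = np.random.default_rng(6)
mu, n, reps = 3.0, 25, 100_000

# One sample mean (the estimate) per replication.
xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
print(xbar.mean())   # approximately mu = 3.0
```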

Bias

The bias of a point estimator \(\hat \theta\) is defined as \(B(\hat\theta) = E(\hat\theta)-\theta\).

Mean Square Error

The mean square error of a point estimator \(\hat\theta\) is the expected value of \((\hat\theta-\theta)^2\):

\[ MSE(\hat\theta)= E\{(\hat\theta-\theta)^2\} \]

The mean square error can be rewritten as \(MSE(\hat\theta)=Var(\hat\theta)+B(\hat\theta)^2\).
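These definitions can be illustrated by comparing two estimators of \(\sigma^2\). The sketch below (an assumed example, not from the text) contrasts the unbiased sample variance \(S^2\) (divisor \(n-1\)) with the biased maximum-likelihood version (divisor \(n\)), and checks the decomposition \(MSE = Var + B^2\) numerically.

```python
# Illustrative bias/MSE comparison for two variance estimators.
import numpy as np

rng = np.random.default_rng(5)
n, reps = 10, 200_000
mu, sigma2 = 0.0, 4.0

x = rng.normal(mu, np.sqrt(sigma2), size=(reps, n))

est_unbiased = x.var(axis=1, ddof=1)   # S^2, divisor n-1
est_mle = x.var(axis=1, ddof=0)        # MLE, divisor n

def bias_and_mse(est, theta):
    bias = est.mean() - theta              # B(theta_hat)
    mse = ((est - theta) ** 2).mean()      # E{(theta_hat - theta)^2}
    return bias, mse

b1, mse1 = bias_and_mse(est_unbiased, sigma2)
b0, mse0 = bias_and_mse(est_mle, sigma2)

print(b1)   # approximately 0 (unbiased)
print(b0)   # approximately -sigma2/n = -0.4 (biased)
print(mse1, est_unbiased.var() + b1**2)    # MSE = Var + Bias^2
```

The biased estimator trades a small amount of bias for lower variance, which is why comparing estimators by MSE rather than bias alone can be informative.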

Resources

You must log on to your CI Google account to access the video.

Press the “b” key to access hidden notes.

Lecture     Slides   Videos
Monday      Slides   Video
Wednesday   Slides   Video

HW 3 Problem 2