Understanding Confidence Intervals, Z-Scores, and t-Distributions, Study notes of Statistics

An in-depth explanation of confidence intervals, z-scores, and t-distributions. It covers the relationship between these concepts, the differences between normal and t-distributions, and how to convert between raw data, t-values, and p-values. Additionally, it explains how to calculate confidence intervals using R and the significance of z-scores. Students will gain a solid understanding of these statistical concepts and their applications.

Typology: Study notes

2021/2022

Uploaded on 09/12/2022

selvam_0p3

9. Confidence Intervals and Z-Scores

We're accumulating a lot of terminology to refer to locations or areas of distributions: standard error, standard deviation, percentile, quartile, etc. This lab will help clarify how all these relate to one another. We will also look at the difference between a normal population distribution and a sampling t-distribution. Much of what is covered here progresses straight into t-tests, which we will cover in the next lab.

9.1 t-distribution (a.k.a. Student's t-distribution)

• While the normal distribution typically describes our raw data, the t-distribution describes the distribution of our sample statistics (e.g. sample means). The t-distribution is very similar to the standard normal distribution in shape; however, the tails of the t-distribution are generally fatter and its variance is greater than 1.

• Note that there is a different t-distribution for each sample size; in other words, it is a family of distributions. So when we speak of a specific t-distribution, we have to specify the degrees of freedom, where df = n − 1. As our sample size increases (and degrees of freedom increase), the t-distribution approaches the shape of the normal distribution.
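This convergence is easy to check in R: qt() returns t-quantiles for a given number of degrees of freedom, and qnorm() returns the corresponding normal quantiles (both functions are used later in this lab).

```r
# Compare the 97.5th-percentile quantile of t-distributions with
# increasing degrees of freedom against the same normal quantile.
qt(0.975, df = 3)     # ~3.18 -- fat tails with few degrees of freedom
qt(0.975, df = 30)    # ~2.04
qt(0.975, df = 1000)  # ~1.96
qnorm(0.975)          # ~1.96 -- the normal quantile the t approaches
```

Notice how quickly the t-quantile shrinks toward the normal value as the degrees of freedom grow.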

[Figure: the normal distribution vs. the t-distribution]

9.2 Converting between raw data, t-values, and p-values

Percentile - the value below which a given percentage of observations in a group falls

Quartile - the three points (Q1, Q2, Q3) that divide the data set into 4 equal groups, each group comprising a quarter of the data

Alpha level - a predetermined probability threshold at which we make a decision (e.g. reject a null hypothesis)

P-value - (a percentile) the probability of obtaining the observed value, or one more extreme, due to random chance alone

Critical t-value - the t-value that corresponds to the α-level

Actual t-value - the t-value that corresponds to the raw data value being tested against the α-level (the signal-to-noise ratio)

Signal - the difference between the test and mean values

Noise - a measure of the spread of the data
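The definitions above can be tied together with a short R sketch; the sample x and the hypothesized mean mu0 below are made-up values for illustration only.

```r
# Made-up sample and hypothesized population mean (illustration only)
x   <- c(48.2, 51.5, 49.8, 50.9, 47.6, 52.3)
mu0 <- 50
n   <- length(x)

signal <- mean(x) - mu0      # signal: difference between test and mean values
noise  <- sd(x) / sqrt(n)    # noise: standard error of the mean
t_act  <- signal / noise     # actual t-value (signal-to-noise ratio)

t_crit <- qt(0.975, df = n - 1)            # critical t-value at alpha = 0.05
p_val  <- 2 * pt(-abs(t_act), df = n - 1)  # two-tailed p-value
```

If the actual t-value exceeds the critical t-value (equivalently, if the p-value falls below alpha), the observed difference is unlikely to be due to random chance alone.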

9.3 Confidence Intervals

• The following points discuss and provide examples of how we can calculate confidence intervals and convert between the above scales in R.

• The functions qnorm() and pnorm() convert between units of standard deviations (= standard errors in this case) and percentiles (= probabilities), and vice-versa, for a normal distribution. Remember, pnorm() takes a value in standard deviation / error units and returns a percentile / probability, and qnorm() does the opposite.

pnorm(x, mean, sd)

• The command above will return the area to the left of x under the normal distribution curve with a mean of mean and a standard deviation of sd. If you don't specify mean or sd, R will assume a mean of zero and an SD of 1. You can use the qnorm() statement to work backwards from probabilities as well:

qnorm(p, mean, sd)

• The command above will return the number x for which the area to the left of x under the normal distribution curve with a mean of mean and a standard deviation of sd is equal to p. Again, if you don't specify mean or sd, R will assume a mean of zero and an SD of 1. With that in mind, we can calculate some basic confidence intervals with these commands.

• We know that one standard error of the mean (SEx) for large sample sizes (or one standard deviation of a normally distributed population) is equivalent to the ~68% confidence interval of the mean, because ~34% of values fall within 1 SE on each side of the mean. We can confirm this with the pnorm() command. Remember that pnorm() gives you the total area under the curve to the LEFT of the number that you specify.

pnorm(1)     # ~0.841
pnorm(-1)    # ~0.159
qnorm(0.16)  # ~ -0.994
qnorm(0.84)  # ~ 0.994

Then, remembering the formula for confidence intervals, CI = x̄ ± t(α/2, df) × s/√n:

mean(VarA) + sd(VarA)/sqrt(4) * qt(0.975, 3)
mean(VarA) - sd(VarA)/sqrt(4) * qt(0.975, 3)

• I hear you saying, "Wow, that seems like a lot of work. I mean, six lines of code? You have to be kidding me." Well, thankfully there is a shortcut in R. The t-test function in R (which we will work with more in the next lab) also returns confidence intervals for a sample. You can try it out now:

t.test(VarA, conf.level = 0.95)  # returns the 95% confidence interval
t.test(VarA, conf.level = 0.90)  # returns the 90% confidence interval
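As a sanity check, the interval reported by t.test() matches the manual formula; VarA below is a made-up stand-in, since the lab's actual data are defined elsewhere.

```r
VarA <- c(655, 670, 690, 685)  # hypothetical yields; n = 4, so df = 3

# Manual confidence interval: mean +/- t * SE
lower <- mean(VarA) - sd(VarA)/sqrt(4) * qt(0.975, 3)
upper <- mean(VarA) + sd(VarA)/sqrt(4) * qt(0.975, 3)

# The same interval, extracted from t.test()
ci <- t.test(VarA, conf.level = 0.95)$conf.int
c(lower, upper)  # identical to the bounds stored in ci
```

Pulling out $conf.int is handy when you want the interval as numbers rather than printed output.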

9.4 Z-Score (a.k.a. Standard Score)

• A z-score is a metric of where a given value falls within a distribution (a normal distribution, to be precise), in units of standard deviations of the distribution. To accomplish this, we need to create a z-distribution, which is just our distribution of scores with the mean adjusted to zero and the standard deviation adjusted to one. But in reality, we don't need to create an entirely new distribution; we can just adjust our one score of interest to express it as a number of standard deviations from the mean.

• The z-score is calculated as the value of interest, minus the mean of the distribution, divided by the standard deviation of the distribution: z = (x − μ) / σ. This expresses z as a deviation from the mean, relative to the standard deviation (in "units" of standard deviations).

• We can use z-scores to compare one score to a distribution of scores. Say we have a new unidentified lentil plant in our experiment with a yield of 690. We suspect that it belongs to Variety A, but we are not sure. We can calculate a z-score for the new plant (since we know the mean and standard deviation of Variety A, which we are treating as a population):

mean(VarA)
sd(VarA)
z = (690 - mean(VarA)) / sd(VarA)
z

• We can then use pnorm() to determine the percentile of the new lentil value (and hence how plausible it is that the plant belongs to the population of Variety A). We will have to decide on a cutoff value that we are comfortable with; this should be done ahead of time. This is very similar to a one-sample t-test, which we will talk about more in Lab 11.

pnorm( z )
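To make the z-score steps concrete with self-contained numbers: the mean yield of 600 and SD of 50 below are assumptions for illustration, not the lab's actual Variety A values.

```r
mu    <- 600  # assumed mean yield of Variety A (hypothetical)
sigma <- 50   # assumed standard deviation (hypothetical)

z <- (690 - mu) / sigma  # z = 1.8 standard deviations above the mean
pnorm(z)                 # ~0.964: the new plant sits near the 96th percentile
1 - pnorm(z)             # ~0.036: chance of a yield this high or higher from Variety A
```

With an upper-tail cutoff of alpha = 0.05, a probability of ~0.036 would make us doubt that the new plant belongs to Variety A.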

CHALLENGE:

  1. What is the difference between a Normal distribution and a t-distribution? When are they similar?
  2. When is a z-score a better metric to use than a t-score? Why do we usually use t-scores for statistical testing?
  3. The yield of a variety of wheat was measured in a replicated and randomized experiment, and yield was found to be approximately normally distributed. The 2nd and 98th percentiles were 29 and 41 kg/ha, respectively. What is the approximate standard deviation?