These notes discuss the robustness of t-methods when data are non-normal or contain outliers. They explain the importance of checking the nearly normal assumption, describe methods for detecting non-normality such as box plots, histograms, and stem plots, cover the effects of skewness, outliers, and sample size on the use of t-methods, and suggest alternatives like nonparametric methods for handling outliers.
Stat 4473 – Data Analysis
Robustness of t-Methods
The t-methods assume that the population is normally distributed. Practically speaking, there's no way to be certain this is true. (In fact, it's almost certainly not true, since the normal distribution is an idealized population model.) The usefulness of the t-methods therefore depends on how strongly they are affected by violations of this assumption. Fortunately, the t-methods are robust: they are not strongly affected by non-normality of the population except when outliers or strong skewness are present.
Defn: A confidence interval or hypothesis test is called robust if the confidence level or p-value does not change much when the conditions for use of the procedure are violated.
We might say that before using the t-methods, we should check the “nearly normal assumption” that the data appear close to normal — symmetric, single peak, no outliers. We will use box plots, histograms and/or stem plots, and formal tests of normality to check the nearly normal assumption. Modified boxplots are often useful in spotting outliers.
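As a sketch of how such a check might be automated (the function name, sample values, and 0.05 cutoff below are illustrative, not from the notes), the following Python fragment combines a formal test of normality — Shapiro-Wilk — with the 1.5 × IQR fence rule that a modified boxplot uses to flag outliers:

```python
import numpy as np
from scipy import stats

def check_nearly_normal(data, alpha=0.05):
    """Screen a sample for the nearly normal assumption:
    a Shapiro-Wilk test of normality plus the 1.5*IQR fence
    rule a modified boxplot uses to flag outliers."""
    x = np.asarray(data, dtype=float)
    _, p_value = stats.shapiro(x)          # formal test of normality
    q1, q3 = np.percentile(x, [25, 75])
    fence = 1.5 * (q3 - q1)
    outliers = x[(x < q1 - fence) | (x > q3 + fence)]  # beyond the fences
    return p_value > alpha, outliers

# A small sample with one wild value: the fence rule flags it.
sample = [9.8, 10.1, 10.0, 9.9, 10.2, 10.1, 9.7, 10.3, 10.0, 25.0]
looks_normal, flagged = check_nearly_normal(sample)
print(looks_normal, flagged)
```

A formal test alone is not enough with small samples, which is why the notes recommend plots as well; this sketch pairs the test with the same outlier rule the plots would show visually.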
Ways to not be normal
Multiple peaks: A plot of the data shows two or more peaks. When you see this, look for the possibility that your data come from two groups. If so, try to separate the data into its separate groups, then analyze each group separately.
Strong skewness: If a plot of the data is strongly skewed, you can't use the t-methods unless your sample is large.
Outliers should be investigated. Sometimes, it’s obvious that a data value is wrong and the justification for removing or correcting it is clear. When there’s no clear justification for removing outliers, you might want to run the analysis both with and without the outliers and note any differences in your conclusions. Any time data values are set aside, you must report on them individually. An analysis of the non-outlying points, along with a separate discussion of the outliers, is often very informative and can reveal important aspects of the data. See more about handling outliers on the next page.
P.S. As tempting as it is to get rid of annoying values, you can’t just throw away outliers and not discuss them. It isn’t appropriate to lop off the highest or lowest values just to improve your results.
How much non-normality matters depends on the sample size. Unfortunately, it matters most when it's hardest to check. Here are some practical guidelines for using the t-methods to perform hypothesis tests and construct confidence intervals:
Sample size less than 15: The data should appear close to normal (symmetric, single peak, no outliers). With so little data, it’s rather hard to tell. But if you do find outliers or skewness, you should not use t.
Sample size at least 15: The t-methods can be used except in the presence of outliers or strong skewness. The t-methods will work well as long as the data are single peaked and reasonably symmetric.
Large samples: The t-methods can be used even for clearly skewed distributions when the sample is large. Outliers are still a concern, however; when they are present, perform the analysis twice, reporting the results with and without the outliers, even for large samples. See below.
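To illustrate the guidelines above, here is a minimal sketch (the data are simulated, and the null mean of 48 is arbitrary) of running a one-sample t-test and a t-interval with scipy once the sample-size and outlier checks have passed:

```python
import numpy as np
from scipy import stats

# Simulated sample of n = 20 measurements: large enough that the
# t-methods apply as long as the data are single peaked, reasonably
# symmetric, and free of outliers.
rng = np.random.default_rng(1)
x = rng.normal(loc=50, scale=4, size=20)

t_stat, p_value = stats.ttest_1samp(x, popmean=48)   # H0: mu = 48
ci = stats.t.interval(0.95, len(x) - 1,
                      loc=np.mean(x),
                      scale=stats.sem(x))            # 95% t-interval for mu
print(p_value, ci)
```

Note that the sample size only enters through the degrees of freedom; the guidelines above govern whether these numbers can be trusted at all.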
Outliers: Note that the t-methods are not resistant to the effects of outliers (since the means and standard deviations on which they are based are not resistant to the effects of outliers). The nonparametric methods that serve as alternatives to the t-methods are based on ranks and ARE resistant to the effects of outliers.
One approach to handling outliers is to perform the analysis with and without the outlier(s), if the sample size is reasonable. Sometimes the outliers don’t affect the analysis anyway. If the presence or absence of the outlier doesn’t make any significant difference in the results of the analysis, then its presence is of no concern. If, on the other hand, the presence of the outlier does make a significant difference in the results of the analysis, then a statistical method that is resistant to outliers should be used.
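The run-it-both-ways approach can be sketched as follows; the helper name, sample values, and null mean of 50 are hypothetical, and the 1.5 × IQR fence stands in for whatever rule was used to identify the outliers:

```python
import numpy as np
from scipy import stats

def t_test_with_and_without_outliers(data, popmean):
    """One-sample t-test run twice: on the full data, and with
    1.5*IQR outliers set aside (reported, never silently dropped)."""
    x = np.asarray(data, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    fence = 1.5 * (q3 - q1)
    keep = (x >= q1 - fence) & (x <= q3 + fence)
    p_all = stats.ttest_1samp(x, popmean).pvalue        # outliers in
    p_trim = stats.ttest_1samp(x[keep], popmean).pvalue # outliers out
    return p_all, p_trim, x[~keep]   # both p-values plus the set-aside points

# One wild value among eleven: here the outlier flips the conclusion
# at the 5% level, so a resistant method should be considered.
sample = [51, 52, 53, 52, 51, 53, 52, 54, 51, 52, 80]
p_all, p_trim, set_aside = t_test_with_and_without_outliers(sample, popmean=50)
print(p_all, p_trim, set_aside)
```

Returning the set-aside points makes it easy to honor the reporting rule above: the outliers are part of the output, not quietly discarded.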
The nonparametric alternatives to the t-methods are based on ranks and have dual usage: (1) as distribution-free alternatives to the t-methods when the normality assumption is violated, and (2) as resistant (i.e., resistant to the effect of outliers) alternatives to the t-methods when outliers affect the analysis (even if the normality assumption checks out).
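As an illustration of that dual usage (the two group samples below are made up), compare a two-sample t-test with a rank-based alternative, the Mann-Whitney test, when one group contains an extreme outlier:

```python
from scipy import stats

# Two hypothetical groups; every value in b exceeds every value in a,
# but b also carries one extreme outlier that inflates its variance.
a = [12, 14, 13, 15, 14, 13, 16, 15]
b = [17, 18, 19, 18, 17, 20, 19, 95]

# Mean-based: the outlier inflates b's standard deviation, masking the shift.
t_p = stats.ttest_ind(a, b, equal_var=False).pvalue

# Rank-based: the outlier is just the largest rank, so the shift is detected.
u_p = stats.mannwhitneyu(a, b, alternative='two-sided').pvalue

print(t_p, u_p)
```

The rank test sees only that 95 is the largest observation, not how large, which is exactly the resistance to outliers described above.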