We are teaching the t test and ANOVA this week in the introductory ecology lab. As I prepared the teaching materials, one thing about the t test caught my attention. In many statistics classes, I was taught that the variable under comparison should be normally distributed within each group to warrant the application of the t test. If the distribution of the variable is not normal, we should either transform the variable to conform to normality, or use nonparametric methods that do not require distributional assumptions. I think I am not alone here. But as I reviewed the t test, I started to think that you don't necessarily need the normality assumption to make the t test valid. Here is why.

A key place to start is to examine how the normality assumption is used to validate the t test. If the variable of interest $X$ is normally distributed, the sample mean $\bar{X}$ is normally distributed. Then $Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$ follows a standard normal distribution $N(0, 1)$. If we replace the unknown $\sigma$ with the sample standard deviation $s$, $T = \frac{\bar{X} - \mu}{s/\sqrt{n}}$ follows a $t$ distribution with $n - 1$ degrees of freedom. This is how the t test is derived. The proof of the last step is available in most mathematical statistics textbooks. Essentially, you have to prove 1) that $\bar{X}$ and $s^2$ are independent, and 2) that $\frac{(n-1)s^2}{\sigma^2}$ follows a chi-square distribution with $n - 1$ degrees of freedom.
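To make the statistic concrete, here is a minimal sketch in Python (assuming NumPy and SciPy are available; the sample and null mean are invented for illustration) that computes $T$ directly from the formula above and checks it against `scipy.stats.ttest_1samp`:

```python
# Compute the one-sample t statistic T = (xbar - mu0) / (s / sqrt(n)) by hand
# and compare it to scipy's implementation. Data are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=25)  # hypothetical sample
mu0 = 5.0                                    # null-hypothesis mean

n = len(x)
xbar = x.mean()
s = x.std(ddof=1)                            # sample standard deviation
t_by_hand = (xbar - mu0) / (s / np.sqrt(n))

t_scipy, p_scipy = stats.ttest_1samp(x, popmean=mu0)
print(t_by_hand, t_scipy)                    # the two statistics agree
```

Under the null, `t_by_hand` is compared against a $t$ distribution with $n - 1 = 24$ degrees of freedom, which is exactly what `ttest_1samp` does to produce its p-value.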

Technical part aside, we see that the normal distribution of $\bar{X}$ is key to the derivation of the t test. The normal distribution of $\bar{X}$ results from the fact that $X$ is normally distributed. This suggests that the normality assumption on the variable is a sufficient assumption to validate the t test. But it is not a necessary assumption. The key here is the normality of $\bar{X}$. As long as this holds, the t test is valid.

In fact, $\bar{X}$ can be normally distributed even if $X$ is not. As a result of the central limit theorem, the distribution of the sample mean approaches a normal distribution as the sample size increases. So as long as your sample size is big, the distribution of $\bar{X}$ is asymptotically normal. That warrants the validity of the t test even if the variable does not have a normal distribution itself.

This is not to say you can just use the t test no matter what. One key element for the normality of $\bar{X}$ is a large sample size. How large is large? It depends. It depends on how "un-normal" the variable is. It is hard to come up with a universal rule. Some people suggest that a sample size of 20-30 is usually sufficient.
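One way to see how "it depends" plays out is to estimate the actual type I error of the t test on skewed data at different sample sizes. The sketch below (lognormal data, sample sizes, and repetition count are all invented for illustration) applies a nominal 5% one-sample t test when the null is in fact true:

```python
# Estimate the type I error of a two-sided one-sample t test when the data
# are lognormal (right-skewed). With nominal alpha = 0.05, the rejection
# rate should drift toward 0.05 as n grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_mean = np.exp(0.5)  # mean of LogNormal(0, 1), so the null is true
reps = 5000

rates = {}
for n in (10, 30, 100):
    x = rng.lognormal(mean=0.0, sigma=1.0, size=(reps, n))
    _, p = stats.ttest_1samp(x, popmean=true_mean, axis=1)
    rates[n] = np.mean(p < 0.05)
    print(n, rates[n])
```

For data this skewed, the rejection rate at $n = 10$ sits noticeably above the nominal 5% and only approaches it as $n$ increases, which is why a "20-30" rule of thumb cannot be universal.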

This is also not to say you should always use the t test. Certainly, you can apply the t test when the variable is not normally distributed. It is a valid test. But it might not be the most powerful test. Research suggests that nonparametric counterparts of the t test, such as the Wilcoxon rank sum test, can outperform the t test when the variable is not normally distributed. At least, it seems reasonable to try these nonparametric methods if the variable under comparison clearly deviates from a normal distribution.
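The power difference can be checked by simulation. The sketch below (groups, shift size, and repetition count are invented for illustration) repeatedly draws two skewed samples that differ by a location shift and counts how often each test detects the difference at the 5% level:

```python
# Compare the power of the two-sample t test and the Wilcoxon rank sum
# (Mann-Whitney U) test on exponential data with a location shift.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
reps, n, shift = 2000, 15, 0.8

t_rej = u_rej = 0
for _ in range(reps):
    a = rng.exponential(scale=1.0, size=n)
    b = rng.exponential(scale=1.0, size=n) + shift  # shifted group
    if stats.ttest_ind(a, b).pvalue < 0.05:
        t_rej += 1
    if stats.mannwhitneyu(a, b, alternative="two-sided").pvalue < 0.05:
        u_rej += 1

print("t test power:", t_rej / reps)
print("Wilcoxon power:", u_rej / reps)
```

For heavily skewed distributions like the exponential, the rank-based test rejects more often here, consistent with the research the paragraph above refers to; for data close to normal, the t test would have the edge.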

Chao,

First off, I enjoy the blog quite a bit! I've often wanted to do something like this, and it's great to have someone to look up to as I consider making a blog in the upcoming years.

This is a nice discussion of t-tests! One thing I'd note here is that the "20-30" suggestion for sample sizes is actually quite good when the data exhibit a low degree of skewness. With skewed data, it's generally much wiser to change the "20-30" suggestion to something much higher.

-Rich


Excellent point. I have not thought about the sample size issue in detail. Maybe a simulation would be helpful. I have no idea whether we can analytically derive the proper sample size for a particular degree of skewness in the data.
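The simulation mentioned above could start from something like this sketch (gamma distributions, the fixed sample size, and the repetition count are my own illustrative choices): a Gamma distribution with shape $k$ has skewness $2/\sqrt{k}$, so varying the shape sweeps the skewness while keeping the null true.

```python
# Type I error of a nominal 5% one-sample t test at fixed n = 30, for
# gamma data of increasing skewness (Gamma(k) has mean k and skewness 2/sqrt(k)).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n, reps = 30, 5000

rates = []
for shape in (16.0, 4.0, 1.0, 0.25):  # skewness 0.5, 1, 2, 4
    x = rng.gamma(shape, scale=1.0, size=(reps, n))
    _, p = stats.ttest_1samp(x, popmean=shape, axis=1)  # true mean = shape
    rates.append(np.mean(p < 0.05))
    print(f"skewness {2 / np.sqrt(shape):.1f}: rejection rate {rates[-1]:.3f}")
```

At mild skewness the rejection rate stays near 5%, while at high skewness it is clearly inflated at $n = 30$, which is in line with Rich's point above.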
