# Stats for Twits

**A simple guide to the use of hypothesis-testing statistics**

This blog is not a comprehensive guide to statistics. It is intended to remind/inform you of the general format and structure of hypothesis-testing statistics. It does not tell you how to carry out any particular test.

Even if you think you know nothing about statistics, it’s almost certain that you do. You will probably have heard of terms like the “average” value or maybe the “range” of some data or possibly its “standard deviation” or “variance”. All of these things tell you something about a set of data; they are known as **descriptive statistics**. The statistics we are concerned with here are called **hypothesis-testing statistics**. For the most part you will be using them to compare one set of data with another set of data.

The format you follow is similar for most of these tests:

**Invent a null hypothesis**

Using your devastating powers of observation, you have noticed that adult humans seem on the whole to be taller than baby ones. You could easily investigate this by measuring some of each kind and comparing the average values of the two sets of data.

An **hypothesis** is simply a statement which offers an explanation of your observations. In this case our **experimental hypothesis** might be that all adult humans have had special cosmetic surgery to lengthen their legs and make them taller than babies. Alternatively, we might suggest that adults have been around for longer and therefore have grown bigger. Both of these would be experimental hypotheses, the latter being the more reasonable one.

A **null hypothesis** is a special sort of hypothesis which you invent purely for the purpose of doing the statistical test. It does not have to agree with your experimental hypothesis. The word **null** means a condition of nothingness or lacking any distinction. A null hypothesis is sometimes called an hypothesis of no difference. It is always stated as though there were no difference between the two things you are comparing. If we were doing a test that compared our two average (or mean) heights, a suitable null hypothesis would be:

**There is no significant difference between the means of the two sets of data**

Remember, it might be obvious that there is a difference, but you state it like this anyway. Having done the statistical test you will end up either accepting or rejecting this statement.

**Calculate the value of the test statistic**

All the tests do something different but the general pattern of what you do is the same. The next thing you do is use your data to calculate a value of the test statistic you are using (this will have a name, usually a letter like “t”, “U”, “r_{s}”). You calculate a value that is specific for your data.

**Find the critical value of the test statistic**

Statisticians are very clever (except for Heronimous Bing of Oxford, he is thick). They have spent a long time working out what are known as **critical values** of test statistics for all combinations of circumstances and sets of data. You must extract from one of their tables of **critical values** the value that applies to your combination of circumstances. Which value applies depends on the number of items of data in each data set and the degree of precision you want to use in either accepting or rejecting your null hypothesis. This is the real value of these techniques: they allow you to say how certain you are when you either accept or reject the null hypothesis. You get to choose how certain you want to be.

Here is part of a table of critical values for a statistic called Spearman’s rank correlation coefficient (only the row and values used in the example below):

| Pairs of data | 5% significance level | 1% significance level |
| --- | --- | --- |
| 9 | 0.683 | 0.833 |

You enter the table at the appropriate number of pairs of data (let’s say we have 9 pairs of data). The critical value lies somewhere along this row.

You will have noticed that each column is headed with a percentage significance level. This is the part where you get to pick the degree of precision or confidence you want in either accepting or rejecting your null hypothesis.

Let us say that you wanted to be as certain as you could be (using our table above) that you would be correct in accepting or rejecting your null hypothesis. Enter the table at the 1% significance column and find the appropriate critical value by going along the 9-pairs-of-data row. As you can see, the critical value is 0.833.

With this particular test, if the value you’ve calculated for your own data is equal to or bigger than the critical value you reject the null hypothesis. If the value for your data is smaller than the critical value you accept the null hypothesis. In accepting or rejecting it at the 1% significance level you are saying: “If I did this test a very large number of times I would expect to be correct in accepting or rejecting my null hypothesis 99% of the time. I would expect a different result due to chance only 1% of the time”. Put simply (and not quite accurately, but hopefully you know what I mean): “I’m 99% certain that I’m right in accepting or rejecting my null hypothesis”.

If you are not so concerned with being near certain you can pick a bigger % significance level. If you picked the 5% level the critical value would be 0.683. This is smaller than the critical value for 1% significance, so it will be easier for your value (calculated from your own data) to beat it and reject the hypothesis of no difference. However, if you do it at this level you would expect different results due to chance 5% of the time. In other words, 95 times out of 100 you’d expect to be correct in accepting or rejecting your null hypothesis. 5 times out of 100 you’d expect a different result due to chance.
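That decision rule can be sketched in a few lines of Python. This is only an illustration: the function name is invented, and the two critical values are the ones quoted above for 9 pairs of data.

```python
# Critical values of Spearman's rank correlation coefficient for
# 9 pairs of data, taken from the worked example above.
CRITICAL_VALUES_9_PAIRS = {0.05: 0.683, 0.01: 0.833}

def verdict(calculated_rs, significance=0.05):
    """Accept or reject the null hypothesis of no correlation.

    Uses the size of the calculated coefficient (ignoring sign),
    as is usual when reading a two-tailed critical-value table.
    """
    critical = CRITICAL_VALUES_9_PAIRS[significance]
    if abs(calculated_rs) >= critical:
        return "reject the null hypothesis"
    return "accept the null hypothesis"

print(verdict(0.75, 0.05))  # 0.75 beats 0.683 -> reject
print(verdict(0.75, 0.01))  # 0.75 is below 0.833 -> accept
```

Notice that the same calculated value of 0.75 is significant at the 5% level but not at the 1% level, which is exactly the trade-off described above.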

There is no law about what level of significance you choose but given the inherent variability of biological systems (or cussedness) it has become generally accepted that a level of 5% is acceptable for field data.

**Tests you might use and what they do:**

A t-test will tell you if the means of two sets of continuous data, measured at the interval level, are significantly different from one another. (If you have a big sample (25+) you can use it for count data as well.) Some people call it a z-test when you have a big sample, but we use the same formulae. For any t-test you do the null hypothesis will be: *There is no significant difference between the means of the 2 sets of data*
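As a sketch, here is the pooled-variance (equal-variance) form of the t statistic computed by hand in Python. The height figures are invented purely for illustration; in a real test you would then compare the result against a critical-value table, as described above.

```python
import math
from statistics import mean, variance

def t_statistic(a, b):
    """Pooled-variance two-sample t statistic."""
    n1, n2 = len(a), len(b)
    # Pool the two sample variances, weighted by their degrees of freedom
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

adults = [168, 172, 175, 180, 165]  # heights in cm (invented data)
babies = [52, 55, 50, 58, 54]
t = t_statistic(adults, babies)     # an enormous t: the means differ hugely
```

Unsurprisingly, adults versus babies gives a huge t value, so the null hypothesis of no difference between the means would be rejected at any sensible significance level.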

A Mann-Whitney U-test compares the medians of two sets of data and can be used on interval or ordinal data. You can also use it on data that is not normally distributed (unlike a t-test) and for as few as four pieces of data in each sample. For any Mann-Whitney U-test you do the null hypothesis will be: *There is no significant difference between the medians of the two sets of data*
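The U statistic itself comes from rank sums, which is simple enough to sketch by hand. This version assumes no two values are tied (tied values need averaged ranks, which are not handled here), and the sample numbers are invented.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U via rank sums. Assumes no tied values."""
    pooled = sorted(a + b)
    rank = {value: i + 1 for i, value in enumerate(pooled)}
    n1, n2 = len(a), len(b)
    r1 = sum(rank[x] for x in a)        # sum of the ranks of sample a
    u1 = r1 - n1 * (n1 + 1) / 2
    u2 = n1 * n2 - u1                    # U for the other sample
    return min(u1, u2)                   # the smaller U is compared to the table

u = mann_whitney_u([3, 4, 2, 6], [9, 7, 5, 10])  # -> 1.0
```

With most Mann-Whitney tables the calculated U must be *smaller* than or equal to the critical value to reject the null hypothesis, which is the opposite way round from Spearman’s coefficient, so always check the instructions for the table you are using.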

A chi^{2} test does a lot of things but for the most part we use it in a simple way to see if an observed set of data (which has to be counts of things in categories (frequencies)) differs significantly from what we might expect, given our null hypothesis. For any chi^{2} test you do the null hypothesis will be: *There is no significant difference between the observed and the expected frequencies*
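The chi-squared statistic is just the sum of (observed minus expected) squared, divided by expected, over all the categories. A minimal sketch, with an invented coin-tossing example:

```python
def chi_squared(observed, expected):
    """Sum of (O - E)^2 / E over the categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# 45 heads and 55 tails observed, where a fair coin predicts 50:50
value = chi_squared([45, 55], [50, 50])  # -> 1.0
```

The bigger the disagreement between observed and expected counts, the bigger the statistic; as with the t-test, you then compare it against a table of critical values.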

Spearman’s rank correlation coefficient (snappy name eh?) will tell you whether 2 variables are correlated, i.e. does one variable change as the other one changes? It will tell you whether the relationship is positive (both go up together) or negative (one goes up as the other goes down) and the strength of any correlation. For any Spearman’s rank correlation coefficient you do the null hypothesis will be: *There is no significant correlation between the 2 variables*
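The coefficient is calculated from the ranks of the two variables using the standard formula r_s = 1 − 6Σd²/(n(n² − 1)), where d is the difference between the paired ranks. A sketch, again assuming no tied values:

```python
def spearman_rs(x, y):
    """Spearman's rank correlation: r_s = 1 - 6*sum(d^2) / (n(n^2 - 1)).

    Assumes no tied values (ties need averaged ranks, not handled here).
    """
    def ranks(values):
        ordered = sorted(values)
        return [ordered.index(v) + 1 for v in values]

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

spearman_rs([1, 2, 3, 4, 5], [10, 20, 30, 40, 50])  # -> 1.0 (perfect positive)
spearman_rs([1, 2, 3, 4, 5], [50, 40, 30, 20, 10])  # -> -1.0 (perfect negative)
```

A value near +1 or −1 indicates a strong correlation, and a value near 0 a weak one; whether it counts as *significant* depends on the critical-value table, as in the worked example earlier.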

It conveys a much better impression that you know what you are doing if you customize the general null hypothesis to make it pertinent to your own investigation. For example:

*There is no significant difference in mean mass of customers patronizing MacSmug’s Lettuce Emporium and customers patronizing MacBloater’s Land of Grease Burger Bar*

The t-test and U-test are both looking for differences between two sets of data. The chi^{2} test and Spearman’s rank correlation coefficient are both looking for associations between two sets of data.

Finally, Greek letters do not seem to be catered for by this blog-host, so here’s a picture of what chi-squared looks like in the original Greek:

Watch out for the next blog, where we’ll live life even closer to the edge………
