I am a data scientist working on time series forecasting (using R and Python 3) at the London Ambulance Service NHS Trust. I earned my PhD in cognitive neuroscience at the University of Glasgow, working with fMRI data and neural networks. I favour Linux machines and working in the terminal, with Vim as my editor of choice.
A common task is to check whether the value of a population parameter, as estimated from a random sample of that population, differs meaningfully from zero (or some other number). We can form the null hypothesis that any deviation of our estimate from zero is the result of measurement error - and then see if the observed deviation is large enough to reject the null hypothesis. To do this, we could assume that such measurement errors are normally distributed around zero (this is often the case for samples greater than ~30, thanks to the central limit theorem). Then we could calculate the number of standard deviations our estimate lies away from zero - rejecting the null hypothesis at some critical value (often set at about 2: 1.96 for a two-tailed test at the 5% level). This would be a z test.
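For a sample mean $\bar{x}$ of $n$ observations with known population standard deviation $\sigma$, and a hypothesised value $\mu_0$ (zero here), that statistic is:

$$z = \frac{\bar{x} - \mu_0}{\sigma / \sqrt{n}}$$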
However, it's uncommon to know the population standard deviation in advance, and so it too must be estimated from the sample; the estimated standard deviation of the sample mean is the standard error:
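$$\mathrm{SE} = \frac{s}{\sqrt{n}}, \qquad s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2} \tag{1}$$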
In addition, if the sample is smaller than we'd like (say 10-20), then the distribution of the measurement errors is likely not well approximated by a normal distribution. Instead, the measurement errors would have more extreme values - that is, the distribution would have heavier tails - and would be better approximated by Student's t distribution. The shape of the t distribution varies with the sample size (becoming closer to the normal distribution as the sample size increases), and thus so does the critical value of t that we would require to reject the null hypothesis.
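To see how the critical value shrinks towards the normal's 1.96 as the sample grows, we can query the t distribution's quantile function (a quick illustration using SciPy, which isn't otherwise needed below):

```python
from scipy import stats

# two-tailed 5% critical value of t at increasing degrees of freedom
for df in (5, 10, 30, 100):
    print(df, round(stats.t.ppf(0.975, df), 3))
```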
Here we define a method that performs a one-sample t test on each column (or row) of a matrix. We compute the mean and standard error of each column (or row) and perform an element-wise division. The vector of t values is returned along with the degrees of freedom:
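A minimal sketch in Python with NumPy (the name ttest_1samp and its signature are illustrative, not necessarily those of the original code):

```python
import numpy as np

def ttest_1samp(x, axis=0):
    """One-sample t test along one axis of a matrix (illustrative sketch).

    Returns the vector of t values and the degrees of freedom.
    """
    x = np.asarray(x, dtype=float)
    n = x.shape[axis]                           # observations per column (or row)
    se = x.std(axis=axis, ddof=1) / np.sqrt(n)  # standard error of each mean
    return x.mean(axis=axis) / se, n - 1
```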
If we have two samples of observations in which each observation in one sample is 'paired' with an observation in the other (e.g. a set of measurements repeated at two time points), then we can compute the paired differences and run the same test as above:
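Which might look like this (again an illustrative sketch, reusing the ttest_1samp above):

```python
def ttest_paired(u, v):
    """Paired t test: a one-sample t test on the element-wise differences."""
    return ttest_1samp(np.asarray(u, float) - np.asarray(v, float))
```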
If, on the other hand, we have two samples without such a pairing, then the variation of the two samples cannot be treated as 'shared' and used to normalise the observed difference as before.
Instead, we have to normalise by the 'pooled' standard deviation (which in turn gives rise to a 'pooled' standard error). If the number of observations in each sample were equal, we could simply average the two variances:
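$$s_p^2 = \frac{s_u^2 + s_v^2}{2} \tag{2}$$

where $s_u^2$ and $s_v^2$ are the variances of samples $u$ and $v$.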
And then, instead of obtaining the standard error by dividing the standard deviation by $\sqrt{n}$, we would multiply the pooled standard deviation by $\sqrt{2/n}$ (because the difference of two independent means accumulates the variance of both samples):
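$$\mathrm{SE}_{\bar{u}-\bar{v}} = s_p\sqrt{\frac{2}{n}} \tag{3}$$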
When the sample sizes are different, we can instead take a 'weighted' average of the variances - weighted, that is, by the proportion of the total degrees of freedom each sample accounts for. The 'pooled' standard deviation in that case is:
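$$s_p = \sqrt{\frac{(n_u - 1)\,s_u^2 + (n_v - 1)\,s_v^2}{n_u + n_v - 2}} \tag{4}$$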
Just like in equation $(3)$ above, we account for the variance contributed by each sample when converting our pooled standard deviation to a pooled standard error:
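$$\mathrm{SE}_{\bar{u}-\bar{v}} = s_p\sqrt{\frac{1}{n_u} + \frac{1}{n_v}} \tag{5}$$

which reduces to equation $(3)$ when $n_u = n_v = n$.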
Since we allow for different sample sizes, we require two vectors, u and v. Unless explicitly told to assume equal variances, we perform Welch's unpaired t test (see below). Otherwise, we compute a few preliminaries, then proceed to the pooled variance and pooled standard deviation. Finally, we compute and return the t statistic (along with the degrees of freedom):
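A sketch in Python with NumPy (the name and the equal_var flag are illustrative; the Welch branch defers to the ttest_welch sketch further below):

```python
def ttest_unpaired(u, v, equal_var=False):
    """Unpaired t test on two vectors (illustrative sketch)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    if not equal_var:
        return ttest_welch(u, v)        # Welch's test, sketched below
    nu, nv = len(u), len(v)
    df = nu + nv - 2
    # pooled variance: df-weighted average of the two sample variances
    sp2 = ((nu - 1) * u.var(ddof=1) + (nv - 1) * v.var(ddof=1)) / df
    se = np.sqrt(sp2 * (1 / nu + 1 / nv))   # pooled standard error
    return (u.mean() - v.mean()) / se, df
```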
The unpaired t test above pooled the variances, taking into account the unequal sample sizes but assuming that the magnitude of the variation did not differ between the two samples. If the variances do differ, we can't pool them. A method for dealing with this situation is Welch's unpaired t test. The method is robust and gives very similar results to the pooling method above when the variances are equal. For this reason, many statistical software packages default to it when performing unpaired t tests.
Welch's t is conceptually simple: divide the mean difference between the two samples by the square root of the sum of the squared standard errors:
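$$t = \frac{\bar{u} - \bar{v}}{\sqrt{\dfrac{s_u^2}{n_u} + \dfrac{s_v^2}{n_v}}} \tag{6}$$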
To calculate the degrees of freedom, we can use the Welch–Satterthwaite equation:
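$$\nu \approx \frac{\left(\dfrac{s_u^2}{n_u} + \dfrac{s_v^2}{n_v}\right)^{2}}{\dfrac{\left(s_u^2/n_u\right)^2}{n_u - 1} + \dfrac{\left(s_v^2/n_v\right)^2}{n_v - 1}} \tag{7}$$

Both pieces together, as an illustrative Python sketch:

```python
def ttest_welch(u, v):
    """Welch's unpaired t test: no pooling, per-sample standard errors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    nu, nv = len(u), len(v)
    seu2 = u.var(ddof=1) / nu           # squared standard error of u's mean
    sev2 = v.var(ddof=1) / nv           # squared standard error of v's mean
    t = (u.mean() - v.mean()) / np.sqrt(seu2 + sev2)
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = (seu2 + sev2) ** 2 / (seu2 ** 2 / (nu - 1) + sev2 ** 2 / (nv - 1))
    return t, df
```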
We create a matrix with three columns - the third being more variable. We then store each column separately as u, v and z, and extend z by concatenating it with v. We call the various ttest methods and print the results:
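One plausible sequence of calls, using the sketches above (the seed, matrix shape and scale factor are my own choices, so the printed numbers will differ):

```python
rng = np.random.default_rng(0)                             # seeded for reproducibility
m = rng.normal(size=(12, 3)) * np.array([1.0, 1.0, 3.0])   # third column more variable
u, v, z = m[:, 0], m[:, 1], m[:, 2]
z = np.concatenate([z, v])                                 # extend z: sample sizes now differ

print(ttest_1samp(m))                                      # one-sample t per column
print(ttest_paired(u, v))                                  # paired t test
print(ttest_unpaired(u, z, equal_var=True))                # pooled unpaired t test
print(ttest_unpaired(u, z))                                # Welch's unpaired t test
```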
Outputs: