I am a data scientist working on time series forecasting (using R and Python 3) at the London Ambulance Service NHS Trust. I earned my PhD in cognitive neuroscience at the University of Glasgow, working with fMRI data and neural networks. I favour Linux machines and working in the terminal, with Vim as my editor of choice.
The study of statistics and data is the study of variation. The general situation is that we are interested in estimating a parameter of a population based on a sample drawn from that population.
Here we introduce some of the key basic measures.
The variance of a set of values $x$ is the sum of the squared deviations of each value from the mean of all the values $\bar x$, divided by one less than the number of observations $n$ (this gives the 'sample variance'; omitting the $-1$ would instead give the 'population variance', and when $n$ is large the distinction barely matters). In essence it is the average squared deviation from the mean in the data:
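$$
\mathrm{var}(x) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}
$$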
In matrix notation:
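$$
\mathrm{var}(x) = \frac{(x - \bar{x})^T (x - \bar{x})}{n - 1}
$$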
We can compare the variation between two different variables and see whether they 'covary' or not. This is called the covariance and is computed just like the variance, except that the deviations come from two sets of observations $x$ and $y$. In essence this is the average 'agreement' between the deviations of each variable from its mean:
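$$
\mathrm{cov}(x, y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n - 1}
$$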
In matrix notation:
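$$
\mathrm{cov}(x, y) = \frac{(x - \bar{x})^T (y - \bar{y})}{n - 1}
$$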
If we have observations of several variables, with each set of observations forming a column of a matrix $A$, then computing all of the dot products between pairs of columns is exactly what $A^T A$ does. So if we use the centering matrix $C$ to remove the mean of each set of observations with
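$$
C = I_n - \frac{1}{n}\mathbf{1}\mathbf{1}^T, \qquad (CA)^T (CA) = A^T C^T C A = A^T C A
$$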
(since $C$ is symmetric and idempotent) and divide by one less than the number of observations, we get the 'covariance matrix':
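$$
\Sigma = \frac{A^T C A}{n - 1}
$$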
Note the variance $\sigma^2$ of each column sitting along the diagonal.
We define a method that by default computes the covariance among the columns of an input matrix $A$, yielding the sample covariance (i.e. using the $n-1$ correction).
First we transpose the matrix if the user wants to treat the rows as the variables. Then we zero-centre the matrix, perform the $A^T A$ step and normalise by the number of observations (subtracting 1 by default, unless `sample` is `False`):
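A minimal sketch of such a method, written here as a standalone NumPy function (the names `covariance`, `rowvar` and `sample` are illustrative assumptions rather than the original implementation):

```python
import numpy as np

def covariance(A, rowvar=False, sample=True):
    """Covariance among the columns of A (or rows, if rowvar=True)."""
    A = np.asarray(A, dtype=float)
    if rowvar:                           # the rows are the variables,
        A = A.T                          # so transpose first
    n = A.shape[0]                       # number of observations
    C = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    Ac = C @ A                           # zero-centre each column
    return (Ac.T @ Ac) / (n - 1 if sample else n)
```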
A method for the variances of the columns of a matrix, rather than the full covariance matrix, simply extracts the values from the diagonal of the covariance matrix. We transpose the vector from a row to a column in the case that we computed the variances across the rows of the matrix:
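Continuing the same sketch:

```python
def variance(A, rowvar=False, sample=True):
    # the variances sit along the diagonal of the covariance matrix
    v = np.diag(covariance(A, rowvar=rowvar, sample=sample))
    # return a column rather than a row when the variables were the rows
    return v.reshape(-1, 1) if rowvar else v
```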
While the variance of, and covariance between, a set of variables are useful, they are squared quantities and as such are not on the same scale as the measured data. A useful measure when reasoning about the data is therefore the square root of the variance. This is called the standard deviation:
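$$
s = \sqrt{\mathrm{var}(x)} = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1}}
$$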
If we divide the standard deviation by the square root of the number of observations, we have the standard error of the mean. This indicates the uncertainty around the sample mean as an estimate of the population mean (since the sample mean will not match the population mean exactly). The $\sqrt{n}$ term is like a 'penalty' incurred for making the 'leap' of inference to the population, and this penalty decreases as your sample size increases:
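$$
SE_{\bar{x}} = \frac{s}{\sqrt{n}}
$$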
It is simple to convert from the variance to the standard deviation and from the standard deviation to the standard error:
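A sketch of the two conversions, building on the hypothetical functions above:

```python
def standard_deviation(A, rowvar=False, sample=True):
    # the standard deviation is the square root of the variance
    return np.sqrt(variance(A, rowvar=rowvar, sample=sample))

def standard_error(A, rowvar=False, sample=True):
    A = np.asarray(A)
    n = A.shape[1] if rowvar else A.shape[0]  # observations per variable
    return standard_deviation(A, rowvar=rowvar, sample=sample) / np.sqrt(n)
```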
We create a matrix, call the methods and print the results (notice that when we use the population covariance by setting the `sample` argument to `False`, the values are a bit smaller):
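One possible demonstration, using a small made-up matrix whose second column is exactly twice the first:

```python
A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0],
              [4.0, 8.0]])

print(covariance(A))                # sample covariance (n - 1)
print(covariance(A, sample=False))  # population covariance (n)
print(variance(A))
print(standard_deviation(A))
print(standard_error(A))
```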
Outputs:
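```
[[1.66666667 3.33333333]
 [3.33333333 6.66666667]]
[[1.25 2.5 ]
 [2.5  5.  ]]
[1.66666667 6.66666667]
[1.29099445 2.5819889 ]
[0.64549722 1.29099445]
```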