I am a data scientist working on time series forecasting (using R and Python 3) at the London Ambulance Service NHS Trust. I earned my PhD in cognitive neuroscience at the University of Glasgow, working with fMRI data and neural networks. I favour Linux machines and working in the terminal, with Vim as my editor of choice.
The dot product is defined only for pairs of vectors with the same number of entries. You simply match up the corresponding elements of the two vectors, multiply them, and add the results:
$$ \begin{equation} u \cdot v = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 5 \\ 6 \end{bmatrix} = 1 \times 4 + 2 \times 5 + 3 \times 6 = 32 \end{equation} $$

In the implementation we treat both vectors as row vectors and then carry out the multiplication and sum operations.
We create two vectors, call the dot method and print the result:
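The post's original listing isn't reproduced here, but a minimal sketch of the idea, using a plain `dot` function in place of the post's method (the names are illustrative, not the post's own), might look like:

```python
def dot(u, v):
    # The dot product is only defined for vectors of equal length.
    if len(u) != len(v):
        raise ValueError("vectors must have the same number of entries")
    # Multiply corresponding entries and sum the results.
    return sum(a * b for a, b in zip(u, v))

u = [1, 2, 3]
v = [4, 5, 6]
print(dot(u, v))  # 32
```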
The length of a vector $u$ is defined as the square root of the dot product of the vector with itself: \(\sqrt{u \cdot u}\).
We create a vector, call the length method and print the result:
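Again the original listing isn't shown; a sketch of the same computation as a standalone `length` function (a stand-in for the post's length method) could be:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def length(u):
    # The length is the square root of the dot product of u with itself.
    return math.sqrt(dot(u, u))

u = [3, 4]
print(length(u))  # 5.0
```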
To normalise a vector we just divide by the length:
We create a vector, call the norm method and print the result. We also confirm that the length of the normalised vector is 1 (up to floating-point rounding error):
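A sketch of normalisation as a standalone `norm` function (again a stand-in for the post's method, not its actual code): divide each entry by the vector's length and check that the result has length 1.

```python
import math

def length(u):
    return math.sqrt(sum(a * a for a in u))

def norm(u):
    # Divide every entry by the vector's length to get a unit vector.
    l = length(u)
    return [a / l for a in u]

u = [3, 4]
n = norm(u)
print(n)          # [0.6, 0.8]
print(length(n))  # approximately 1.0, up to floating-point rounding
```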
At a low level, matrix multiplication can be seen as a series of dot products between the rows of one matrix and the columns of another. For example, the (2,3) entry of a matrix $C$ produced through the multiplication of two other matrices $A$ and $B$ is the dot product of the 2nd row of $A$ with the 3rd column of $B$.
To do this we can make use of the transpose and dot product methods we already defined: we transpose the second matrix, then for each row of the first matrix we take the dot product with every row of the transposed second matrix (those rows were formerly its columns):
We create two matrices, call the multiply method and print the result:
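The original listing isn't reproduced here, but the transpose-then-dot approach described above can be sketched with standalone functions (illustrative names, not the post's own code):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def transpose(A):
    # Turn the columns of A into rows.
    return [list(col) for col in zip(*A)]

def multiply(A, B):
    # C[i][j] is the dot product of row i of A with row j of
    # the transposed B (formerly column j of B).
    Bt = transpose(B)
    return [[dot(row, col) for col in Bt] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(multiply(A, B))  # [[19, 22], [43, 50]]
```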