Having the inverse method allows us to revisit $EA = U$ and move to a better
formulation of the same idea: $A = LU$. The matrix $L$ is lower triangular and
takes $U$ back to $A$ by applying the elimination steps in the reverse order.
The advantage of this formulation is that the elimination steps don't
mix. In $EA = U$ we see a $-10$ sitting in the (3,1) position of $E$:
That $-10$ comes from the fact that we subtracted 4 times the 1st row from the
3rd, and subtracted 2 times the 1st row from the 2nd, then added 3 times this
new 2nd row to the 3rd (effectively subtracting a further 6 times the 1st row):
so overall $3 \times -2 - 4 = -10$. When we reverse the order of those steps, applying them to
$U$ to get back to $A$, there is no such interference - all the multiples we
used during elimination show up in $L$:
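The mixing, and its disappearance in $L$, is easy to check numerically. The sketch below builds the three single-step elimination matrices described above (the specific steps are my illustrative example, not necessarily the matrix used elsewhere in this project), multiplies them into $E$, and inverts to recover $L$:

```python
import numpy as np

# Hypothetical elimination steps matching the text:
# R2 -= 2*R1, R3 -= 4*R1, then R3 += 3*R2 (the new R2).
E21 = np.array([[1, 0, 0], [-2, 1, 0], [0, 0, 1]])
E31 = np.array([[1, 0, 0], [0, 1, 0], [-4, 0, 1]])
E32 = np.array([[1, 0, 0], [0, 1, 0], [0, 3, 1]])

# Combined elimination matrix: the steps mix, giving
# 3 * -2 - 4 = -10 in the (3,1) position.
E = E32 @ E31 @ E21
print(E[2, 0])  # -10

# The inverse undoes the steps in reverse order, so L holds the
# plain multipliers 2, 4 and -3 with no interference.
L = np.linalg.inv(E)
print(L)
```

Running this shows $L$ with exactly the multipliers used during elimination: 2 and 4 below the first pivot, $-3$ below the second.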
The method I use here is needlessly inefficient: we call the
elimination method three times (the inverse method alone calls it twice!), when it
is easy to simply inject the multiplying values into $L$ as we do a single pass
of elimination. However, at the moment this project is only about learning how
to write classes and methods and secondarily to solidify my grasp of linear
algebra. In the code we account for row exchanges by multiplying $E$ by the
permutation matrix $P$, then taking the inverse of $E$ to get $L$.
Similarly, we ensure $L$ is lower triangular using $P$:
Demo
We create a matrix, call the lu method and print the result: