I used to have trouble understanding why inverting an $n \times n$ matrix requires the same order of operations as solving an $n \times n$ system of linear equations. Specifically, if $A$ is an $n \times n$ invertible matrix and $b$ is a length $n$ vector, then computing $A^{-1}$ and solving the equation $Ax = b$ can both be done in $O(n^3)$ floating point operations (flops). This surprised me because naively computing the $n$ columns of $A^{-1}$ requires solving the $n$ systems of equations

$$Ax_i = e_i, \quad i = 1, \ldots, n,$$
where $e_1, \ldots, e_n$ are the standard basis vectors. I thought this would mean that calculating $A^{-1}$ would require roughly $n$ times as many flops as solving a single system of equations. It was only in a convex optimization lecture that I realized what was going on.
To solve a single system of equations $Ax = b$, we first compute a factorization of $A$, such as the LU factorization. Computing this factorization takes $O(n^3)$ flops, but once we have it, we can use it to solve any new system with the same matrix in only $O(n^2)$ flops. Now to solve $Ax_i = e_i$ for $i = 1, \ldots, n$, we can simply factor $A$ once and then perform $n$ solves using the factorization, each of which requires an additional $O(n^2)$ flops. The total flop count is then $O(n^3) + n \cdot O(n^2) = O(n^3)$. Inverting the matrix requires the same order of flops as a single solve!
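The factor-once, solve-many idea can be sketched in plain Python. This is an illustrative toy, not the code behind any real linear algebra library: the `lu_factor` and `lu_solve` helpers below are my own names, and the factorization omits pivoting, so it assumes the leading principal minors of $A$ are nonsingular.

```python
def lu_factor(A):
    """Return (L, U) with A = L @ U, L unit lower triangular. O(n^3) flops."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]  # elimination multiplier
            L[i][k] = m
            for j in range(k, n):
                U[i][j] -= m * U[k][j]
    return L, U

def lu_solve(L, U, b):
    """Solve A x = b given A = L U. Only O(n^2) flops per right-hand side."""
    n = len(b)
    # forward substitution: L y = b
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    # back substitution: U x = y
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Factor once (O(n^3)), then reuse the factorization on each basis vector
# e_1, ..., e_n to get the columns of A^{-1}: n solves at O(n^2) each.
A = [[4.0, 3.0], [6.0, 3.0]]
n = len(A)
L, U = lu_factor(A)
columns = [lu_solve(L, U, [1.0 if i == k else 0.0 for i in range(n)])
           for k in range(n)]
A_inv = [[columns[j][i] for j in range(n)] for i in range(n)]  # transpose
print(A_inv)  # close to [[-0.5, 0.5], [1.0, -2/3]] up to rounding
```

The key point is that the $O(n^3)$ factorization is paid for only once; each of the $n$ columns of $A^{-1}$ then costs only two triangular solves.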
Of course, as John D. Cook points out: you shouldn't ever invert a matrix. Even if inverting and solving take the same order of flops, inverting is still more computationally expensive and requires more memory.
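The constants make this concrete. Using the standard leading-order flop counts (these estimates are mine, not from the post), LU factorization costs about $\tfrac{2}{3}n^3$ flops and each pair of triangular solves about $2n^2$:

$$\underbrace{\tfrac{2}{3}n^3 + 2n^2}_{\text{factor + one solve}} \qquad \text{vs.} \qquad \underbrace{\tfrac{2}{3}n^3 + n \cdot 2n^2 = \tfrac{8}{3}n^3}_{\text{factor + } n \text{ solves for } A^{-1}}$$

so forming the inverse costs roughly four times as many flops as a single solve for large $n$, even though both are $O(n^3)$.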
- Concavity of the squared sum of square roots is about an interesting homework problem from the same convex optimization course.