Geometric Interpretation of a 3D Matrix Inverse

I work a lot with 3D calculations, and every so often a non-trivial 3D tidbit comes along. Some of these might be of use to others – and so, by the power vested in me as absolute monarch of this blog, I hereby extend its scope to some light 3D geometry. I’ll try to keep posts in this category less rigorous and yet more verbose than regular math write-ups.

Take a 3×3 matrix that you wish to invert, say M. Think of its columns as 3-dimensional vectors, A, B & C:

$$M = \begin{pmatrix} \vert & \vert & \vert \\ A & B & C \\ \vert & \vert & \vert \end{pmatrix}$$

Now take its sought-after inverse, M⁻¹, and think of its rows as 3D vectors, say v, u & w. That means, essentially:

$$M^{-1}M = \begin{pmatrix} - & v & - \\ - & u & - \\ - & w & - \end{pmatrix} \begin{pmatrix} \vert & \vert & \vert \\ A & B & C \\ \vert & \vert & \vert \end{pmatrix} = I$$

Next, focus just on v, M⁻¹’s first row. What can be said of it, in terms of A, B & C? Looking at the first row of the multiplication result – the identity matrix – we see:

$$v \cdot A = 1, \qquad v \cdot B = 0, \qquad v \cdot C = 0$$

Which means in particular that v is orthogonal to both B and C. Assuming B and C aren’t collinear (otherwise M wouldn’t be invertible in the first place), there is but a single direction in 3D space perpendicular to both, and it can be written as B×C – the cross product (vector product) of B and C. Hence v must be a scalar multiple – by some scalar, say α – of this direction:

$$v = \alpha \,(B \times C)$$

To deduce α, recall that v must be scaled so that its dot product with A gives 1. And so:

$$\alpha \,(B \times C) \cdot A = 1 \quad \Longrightarrow \quad v = \frac{B \times C}{A \cdot (B \times C)}$$

The triple product in the denominator, A∙(B×C), should look familiar: it is in fact det(M) – the determinant of the original matrix. Had we inverted M with a more traditional apparatus, say Cramer’s rule, we would have divided by this determinant directly.

Naturally, similar expressions are obtained for the other rows, u & w:

$$u = \frac{C \times A}{B \cdot (C \times A)}, \qquad w = \frac{A \times B}{C \cdot (A \times B)}$$

All the denominators are in fact equal – to each other and to det(M) – since the triple product is invariant under cyclic permutations of its arguments.
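In code, the whole inverse boils down to three cross products and a single dot product. Here’s a minimal C++ sketch – the Vec3 type and the function names are illustrative, not from any particular library:

```cpp
// Minimal illustrative 3D vector type; any equivalent will do.
struct Vec3 { double x, y, z; };

// Cross product of a and b.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Dot product of a and b.
double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 scale(const Vec3& a, double s) { return { a.x * s, a.y * s, a.z * s }; }

// Invert the 3x3 matrix whose columns are A, B, C.
// Writes the rows v, u, w of the inverse.
// Assumes the matrix is invertible, i.e. det(M) = A.(BxC) != 0.
void invert3x3(const Vec3& A, const Vec3& B, const Vec3& C,
               Vec3& v, Vec3& u, Vec3& w) {
    const Vec3 BxC = cross(B, C);
    const Vec3 CxA = cross(C, A);
    const Vec3 AxB = cross(A, B);
    // All three denominators collapse to this single determinant:
    const double invDet = 1.0 / dot(A, BxC);
    v = scale(BxC, invDet);
    u = scale(CxA, invDet);
    w = scale(AxB, invDet);
}
```

Three cross products, one dot product, one division – that’s the entire inverse.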

Why all the hassle?

First, for the fun of it. Personally I find it much easier to understand – and thus remember – geometric stories than algebraic ones.

Second, this formulation exposes several optimization opportunities.

  1. After computing B×C, you can obtain the first denominator – and so all of them – by simply taking a dot product with A.
  2. If you need just a single row of the inverse matrix, you can calculate it directly – without having to invert the entire matrix.
    This is not as far-fetched as it might seem: say you formulate a 3×3 linear system, Mx = b, but you’re actually interested only in the 1st solution coordinate. Just take the 1st row of M’s inverse, as outlined above, and dot-product it with b (see the sketch after this list):

    $$x_1 = v \cdot b = \frac{(B \times C) \cdot b}{A \cdot (B \times C)}$$
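In the same spirit as the sketch above – reusing its illustrative cross() and dot() helpers – extracting just the 1st solution coordinate becomes:

```cpp
// Solve M x = b for the 1st coordinate only, where A, B, C are M's columns.
// Assumes det(M) = A.(BxC) is non-zero.
double solveX1(const Vec3& A, const Vec3& B, const Vec3& C, const Vec3& b) {
    const Vec3 BxC = cross(B, C);
    return dot(b, BxC) / dot(A, BxC);   // x1 = v . b
}
```

This is just Cramer’s rule for x₁ in disguise: the numerator b∙(B×C) is the determinant of M with its first column replaced by b.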

Third, using analytical expressions as above for solving linear equations is generally preferable to numeric solvers. For matrices as small as 3×3, solving numerically would probably be a bad idea anyway – even traditional, tedious inverses (with adjoint matrices and all) would be preferable to numeric solutions.

BTW, higher-dimensional analogues do exist – and are as easy to derive – but the main added value, namely direct geometric insight, is lost beyond three dimensions.


6 Responses to Geometric Interpretation of a 3D Matrix Inverse

  1. Anonymous says:

    enjoyed your math – which makes things look much more natural … BTW this solution is the trivial path for vector-processing instructions, such as those provided by x86 SSE or GPU CUDA/OpenCL

    • Ofek Shilon says:

      @Anon: thanks. From my own measurements I never saw noticeable performance improvements when implementing *3d vector* operations (dot, cross, etc.) with SSE, but I’ll be happy to listen if you measured otherwise.
      Stay tuned, more of this stuff coming up.

  2. Shan says:

    Why does A∙(B×C) = det(M)?
    My understanding of det(M) is the (signed) volume defined by M, which consists of A, B & C;
    B×C gives a vector orthogonal to B & C;
    Then how does A∙(B×C) yield the volume?

    • Ofek Shilon says:

      The geometric explanation is that B×C is not only orthogonal to B & C – its length also equals the area of the parallelogram spanned by B and C. Thus, dotting A with that vector gives the projection of A onto the normal of the B–C plane (i.e., the ‘height’ of the slanted box) times the area of the base – i.e., the volume.
      The algebraic explanation is that B×C can be written (and is commonly defined) as the determinant of a matrix with the unit vectors i, j, k at row 1, and B & C at rows 2 & 3. Taking a dot product of A with any vector expressed in i, j, k components amounts to replacing the i, j, k symbols with A’s coordinates (jot such a dot product down on a piece of paper and you’d see it immediately). Substituting A’s coordinates into i, j, k in the matrix defining B×C gives exactly Mᵀ – A at row 1 (where i, j, k were), B & C at rows 2 & 3 – and det(Mᵀ) = det(M). Thus, A∙(B×C) is algebraically identical to det(M).
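      In symbols:

      $$A \cdot (B \times C) = \det \begin{pmatrix} A_1 & A_2 & A_3 \\ B_1 & B_2 & B_3 \\ C_1 & C_2 & C_3 \end{pmatrix} = \det(M^{\mathsf T}) = \det(M)$$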

  3. ABHIRAM SINGH says:

    Can you please provide a geometric representation – a diagram with a coordinate system, showing how the matrix and its inverse look in that coordinate system?

    • Ofek Shilon says:

      The claim is that row 3 of the inverse is orthogonal to columns 1 & 2 of the matrix. A graphic representation would show one vector perpendicular to two others, something like this: https://ofekshilon.com/ortb/. Illustrating this for all 3 rows would just be visually cumbersome.
