BitWorking

Geometric Algebra applied to Physics

Geometric Algebra can be applied to Physics, and many of the introductions to GA online cover this, but they jump immediately to electromagnetic fields or quantum mechanics. That's unfortunate, since GA can also greatly simplify 2D kinematics. One such example is uniform circular motion.

You should be familiar with all the concepts presented in An Introduction to Geometric Algebra over R^2 before proceeding.

If we have a vector p that rotates at a constant rate of ω rad/s from a starting position p0, then we can describe the vector p very easily:

\bm{p} = \bm{p_0} e^{\omega t \bm{I}}

Let's figure out what the derivative of a Rotor looks like, by first recalling its definition:

 e^{\theta \bm{I}} := \cos(\theta) + \sin(\theta)\bm{I}

We take the derivative with respect to θ:


        \begin{align*}
          \frac{d}{d \theta} e^{\theta \bm{I}} &=  \frac{d}{d \theta} (\cos(\theta) + \sin(\theta)\bm{I}) \\
            &=  -\sin(\theta) + \cos(\theta)\bm{I} \\
        \end{align*}

At this point, observe that cos and sin have changed places, along with a sign change. We know of another operation that does the same thing: multiplication by I. So we get:


        \begin{align*}
          \frac{d}{d \theta} e^{\theta \bm{I}} &= \frac{d}{d \theta} (\cos(\theta) + \sin(\theta)\bm{I}) \\
            &= -\sin(\theta) + \cos(\theta)\bm{I}          \\
            &= \bm{I} (\cos(\theta) + \sin(\theta)\bm{I})  \\
            &= \bm{I} e^{\theta \bm{I}}                    \\
        \end{align*}

Not only does the derivative have a nice neat expression, we can read off from the formula what is happening, which is that the derivative is a vector that is rotated 90 degrees from the original vector. Also note that normally the geometric product isn't commutative, but in this case both factors are rotors, so the order doesn't matter, as the quick check below shows.
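To see why the order doesn't matter here, we can expand both products directly, using only the definition of the Rotor and the fact that I² = -1, which is shown in the introductory article:


        \begin{align*}
        \bm{I} e^{\theta \bm{I}} &= \bm{I} (\cos(\theta) + \sin(\theta)\bm{I}) = -\sin(\theta) + \cos(\theta)\bm{I} \\
        e^{\theta \bm{I}} \bm{I} &= (\cos(\theta) + \sin(\theta)\bm{I}) \bm{I} = -\sin(\theta) + \cos(\theta)\bm{I}
        \end{align*}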

We can go through the same process to show what happens if θ has a constant multiplier k:


        \begin{align*}
          \frac{d}{d \theta} e^{k \theta \bm{I}} &= \frac{d}{d \theta} (\cos(k \theta) + \sin(k \theta)\bm{I}) \\
            &= -k \sin(k \theta) + k \cos(k \theta)\bm{I} \\
            &= k \bm{I} e^{k \theta \bm{I}} \\
        \end{align*}

With our new derivative in hand we can now find the velocity vector for our position vector p, since velocity is just the derivative of position with respect to time.


        \begin{align*}
        \bm{v}  &= \frac{d}{dt} \bm{p} \\
                &= \frac{d}{dt} \bm{p_0} e^{\omega t \bm{I}} \\
                &= \bm{p_0} \omega \bm{I}  e^{\omega t \bm{I}} \\
                &= \omega \bm{p_0} \bm{I} e^{\omega t \bm{I}} \\
        \end{align*}

Again, because we are using Geometric Algebra, we can read off what is going on geometrically from the formula: the velocity is a vector orthogonal to the position vector, scaled by ω.

Note that we've drawn the vector as starting from the position, but that's not required.

We get the acceleration vector in the same manner, by taking the derivative of the velocity vector with respect to time.


        \begin{align*}
        \bm{a}  &= \frac{d}{dt} \bm{v}                                      \\
                &= \frac{d}{dt} \omega \bm{p_0} \bm{I} e^{\omega t \bm{I}}  \\
                &= \omega \bm{p_0} \bm{I} \omega \bm{I} e^{\omega t \bm{I}} \\
                &= \omega^2 \bm{p_0} \bm{I} \bm{I} e^{\omega t \bm{I}}      \\
                &= - \omega^2 \bm{p_0} e^{\omega t \bm{I}}                  \\
        \end{align*}

And again we can just read off from the formula what is going on geometrically, which is that we end up with a vector that is rotated 180 degrees from the position vector and scaled by ω².

We can place the acceleration and velocity vectors as starting from the position vector, and that looks like:

Note how simple this was to derive and that the geometric interpretation could be read off of the resulting formulas. We never needed to leave the 2D plane; all of these calculations took place in 𝔾2. The more classical derivations of uniform circular motion rely on the cross product, which takes you out of ℝ2 into ℝ3, and which doesn't generalize to higher dimensions.
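If you want to check the kinematics numerically, here is a minimal Python sketch (my own, not part of the original derivation). It leans on the fact that, for vectors in 𝔾2 multiplied on the right by a Rotor, mapping e1 ↦ 1 and e2 ↦ i turns the geometric product into ordinary complex multiplication, so Python's built-in complex type can stand in for 𝔾2 here:

    import cmath

    # Sketch: uniform circular motion in G2, modeled with complex numbers.
    # Assumed mapping: e1 <-> 1, e2 <-> 1j, rotor e^(wtI) <-> cmath.exp(1j*w*t).
    w = 1.5                  # angular rate omega, in rad/s
    p0 = complex(2.0, 0.0)   # starting position p0 = 2 e1
    t = 0.7                  # an arbitrary instant in time

    rotor = cmath.exp(1j * w * t)
    p = p0 * rotor             # position:     p = p0 e^(wtI)
    v = w * p0 * 1j * rotor    # velocity:     v = w p0 I e^(wtI)
    a = -w**2 * p0 * rotor     # acceleration: a = -w^2 p0 e^(wtI)

    # v should be orthogonal to p, and a should be -w^2 times p.
    dot = p.real * v.real + p.imag * v.imag
    print(abs(dot) < 1e-12)            # True
    print(abs(a + w**2 * p) < 1e-12)   # True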

2017-01-01

An Introduction to Geometric Algebra over R^2

Geometric Algebra is fascinating, and I believe solves a large number of problems that arise from a more traditional approach to vectors, but I've been very disappointed with the quality of books and explanations I've found, most of them zooming off into abstract realms too quickly, or spending an inordinate amount of time building up a generalized theory before finally getting to something useful.

Below is an explanation of Geometric Algebra that will start with a simple two dimensional vector space, i.e. ℝ2. This will be a concise introduction to 𝔾2, the Geometric Algebra over ℝ2, and then quickly pivot to applications in 𝔾2. This introduction will not cover the fascinating history of GA, Clifford Algebras, or Hermann Grassmann.

I'll presume a familiarity with Linear Algebra; we'll introduce the geometric product on top of that, and we'll have the Geometric Algebra over two dimensions: 𝔾2.

Linear Algebra

Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces, but is also concerned with properties common to all vector spaces. -Wikipedia

You should be familiar with the following axioms and definitions from Linear Algebra:

(\bm{a} + \bm{b}) + \bm{c} = \bm{a} + (\bm{b} + \bm{c}) Associative (1)
\bm{a} + \bm{b}  = \bm{b} + \bm{a} Commutative (2)
\bm{0} + \bm{b}  = \bm{b} Identity (3)
\bm{-a} + \bm{a} = \bm{0} Inverse (4)
c(\bm{a} + \bm{b}) = c\bm{a} + c\bm{b} Scalar Distributive (5)
1 \bm{b}  = \bm{b} Multiplicative Identity (6)
\bm{a} \cdot \bm{b} = ||\bm{a}|| ||\bm{b}|| \cos \theta Dot/Inner Product (7)
\bm{a} \cdot \bm{b} = \sum_{i} a_i b_i Dot/Inner Product (Alternate) (8)

In particular, for ℝ2 we have an orthonormal basis:

 \bm{e_{1}} := (1,0)
 \bm{e_{2}} := (0,1)

where:

 \bm{e_{1}} \perp \bm{e_{2}}

We know how to do vector addition and scalar multiplication of vectors, and that any vector can be represented as a linear combination of basis elements.


        \begin{align*}
          \bm{a} &= -1 \bm{e_{1}} + 2 \bm{e_{2}} \\
          \bm{b} &= 2 \bm{e_{1}} + 3 \bm{e_{2}} \\
          \bm{a} + \bm{b} &=  1 \bm{e_1} + 5 \bm{e_2}
        \end{align*}

One thing to remember about the dot product, or inner product, is that it is 0 for orthogonal vectors:

 \bm{e_{1}} \perp \bm{e_{2}} \implies \bm{e_1} \cdot \bm{e_2} = 0

And a vector dotted with itself gives the square of the norm of the vector, since \cos 0 = 1:

 \bm{a} \cdot \bm{a} = {||\bm{a}||}^2

One important thing to notice about Linear Algebra is how often you have to step outside of ℝ2 to get work done. That is, operations frequently have to take place outside ℝ2, or those operations give you results outside of ℝ2. For example, the dot product of two vectors returns a scalar, which is not a member of ℝ2.

 \bm{a} \cdot \bm{b} := ||\bm{a}|| ||\bm{b}|| \cos(\theta)

Similarly, to rotate vectors you have to create matrices, which don't exist in ℝ2, and apply them to vectors through matrix multiplication.

One final example is the cross-product, which takes two vectors and operates on them to produce a vector that is orthogonal to the original two vectors. But that orthogonal vector doesn't exist in ℝ2; you have to view it as living in ℝ3, with the original ℝ2 embedded in it.

All of this stands in stark contrast to 𝔾2, where these operations take place entirely within 𝔾2. In fact, many of the constructs we use in Linear Algebra, such as rotations, exist as elements of 𝔾2, and applying those operations is just a matter of taking the geometric product with those elements. Not only is 𝔾2 closed under many of these operations, the operations themselves exist as elements of 𝔾2.

Geometric Algebra

The Geometric Algebra 𝔾2 builds upon ℝ2, extending it by adding multiplication of vectors, i.e. a geometric product. Before we get to the geometric product we need to first quickly learn about the exterior product.

Exterior Product

The exterior product operates on two vectors and is written as:

\bm{a} \wedge \bm{b}

The exterior product represents the oriented area defined by the two vectors; more precisely, it represents an oriented area in the plane defined by those vectors, also known as a bivector. There are two important aspects of this. The first is that the exact shape doesn't matter. For example, the bivectors represented below are equal because they have the same orientation (counter-clockwise) and the same area (3).

 (1, 0) \wedge (0, 3) =  (3, 0) \wedge (0, 1)

The second important aspect is that the exterior product is anticommutative; that is, if you reverse the order of the vectors involved then the sign of the exterior product changes.

\bm{a} \wedge \bm{b} = - \bm{b} \wedge \bm{a}

Using two of the vectors above, note that the order that they are used in the exterior product will make the bivectors either clockwise or counter-clockwise.

The properties of the exterior product are:

(\bm{a} \wedge \bm{b}) \wedge \bm{c} = \bm{a} \wedge (\bm{b} \wedge \bm{c}) Associative (1)
c(\bm{a} \wedge \bm{b}) = c\bm{a} \wedge \bm{b} = \bm{a} \wedge c\bm{b} Scalar Associativity (2)
\bm{a} \wedge (\bm{b} + \bm{c}) = \bm{a} \wedge \bm{b} + \bm{a} \wedge \bm{c} Left Distributive (3)
(\bm{a} + \bm{b}) \wedge \bm{c} = \bm{a} \wedge \bm{c} + \bm{b} \wedge \bm{c} Right Distributive (4)
\bm{a} \wedge \bm{b} = -\bm{b} \wedge \bm{a} Anti-symmetric (5)
\bm{a} \parallel \bm{b} \Rightarrow \bm{a} \wedge \bm{b} = 0 Zero for Parallel Vectors. (6)

In what is going to become a recurring theme, let's look at what this means in terms of basis vectors. Since any vector can be written as a linear combination of basis vectors we get:


        \begin{align*}
          \bm{a} &= a_1 \bm{e_{1}} + a_2 \bm{e_{2}} \\
          \bm{b} &= b_1 \bm{e_{1}} + b_2 \bm{e_{2}}
        \end{align*}

If we take their exterior product we get:


        \begin{align*}
        \bm{a} \wedge \bm {b} &= (a_1 \bm{e_{1}} + a_2 \bm{e_{2}}) \wedge (b_1 \bm{e_{1}} + b_2 \bm{e_{2}}) \\
                              &= a_1 b_1 \bm{e_{1}} \wedge \bm{e_{1}}
                               + a_1 b_2 \bm{e_{1}} \wedge \bm{e_{2}}
                               + a_2 b_1 \bm{e_{2}} \wedge \bm{e_{1}}
                               + a_2 b_2 \bm{e_{2}} \wedge \bm{e_{2}}               & \text{via 3, 4} \\
                              &=  0 + a_1 b_2 \bm{e_{1}} \wedge \bm{e_{2}}
                              + a_2 b_1 \bm{e_{2}} \wedge \bm{e_{1}} + 0            & \text{via 6} \\
                              &=  a_1 b_2 \bm{e_{1}} \wedge \bm{e_{2}}
                              - a_2 b_1 \bm{e_{1}} \wedge \bm{e_{2}}                & \text{via 5} \\
                              &= ( a_1 b_2  - a_2 b_1 )\bm{e_{1}} \wedge \bm{e_{2}} & \text{via 2}
        \end{align*}

So the exterior product of any two vectors can be expressed as just a scalar multiple of e1 ∧ e2.
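As a quick check, this formula reproduces the claim about the two bivectors pictured earlier; both reduce to the same multiple of e1 ∧ e2:


        \begin{align*}
        (1, 0) \wedge (0, 3) &= (1 \cdot 3 - 0 \cdot 0) \bm{e_{1}} \wedge \bm{e_{2}} = 3 \bm{e_{1}} \wedge \bm{e_{2}} \\
        (3, 0) \wedge (0, 1) &= (3 \cdot 1 - 0 \cdot 0) \bm{e_{1}} \wedge \bm{e_{2}} = 3 \bm{e_{1}} \wedge \bm{e_{2}}
        \end{align*}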

Geometric Product

Now that we know about the exterior product, we can define the geometric product, which is just the sum of the inner product and the exterior product:

\bm{a} \bm{b} := \bm{a} \cdot \bm{b} +\bm{a} \wedge \bm{b}

Using just the above definition you can show that the geometric product has the following properties:

(\bm{a} \bm{b}) \bm{c} = \bm{a} (\bm{b} \bm{c}) Associative (1)
c(\bm{a} \bm{b}) = c\bm{a} \bm{b} = \bm{a} c \bm{b} Scalar Associativity (2)
\bm{a} (\bm{b} + \bm{c}) = \bm{a} \bm{b} + \bm{a} \bm{c} Left Distributive (3)
(\bm{a} + \bm{b}) \bm{c} = \bm{a} \bm{c} + \bm{b} \bm{c} Right Distributive (4)
\bm{a} \bm{a} = \bm{a} \cdot \bm{a} = ||\bm{a}||^2 Norm (5)
\bm{a} \bm{b} \neq \bm{b} \bm{a} Non-Commutative, except in some cases. (6)
\bm{a} \neq 0 \Rightarrow \bm{a} (\frac{1}{||\bm{a}||^2} \bm{a}) = 1 Vector Inverses (7)
\bm{a} \perp \bm{b} \Rightarrow \bm{a} \bm{b} = \bm{a} \wedge \bm{b} Orthogonal vector multiplication. (8)

With the geometric product as defined above, and vector addition, our Geometric Algebra 𝔾2 forms a unital associative algebra with an orthonormal basis:


        1, \bm{e_1}, \bm{e_2}, \bm{e_{1} e_{2}}

We can work out a multiplication table for the basis elements, starting from the observation that if two elements are orthogonal then their dot product is zero, so between orthogonal vectors the geometric product reduces to the exterior product, which is anti-symmetric. For our basis vectors that implies:


        \bm{e_1} \bm{e_2} = \bm{e_1} \wedge \bm{e_2}

And that implies, by the anti-symmetry of the exterior product:


        \bm{e_1} \bm{e_2} = - \bm{e_2} \bm{e_1}

And for the geometric product of any basis element with itself, because the two factors are parallel the exterior product is zero, so:


        \bm{e_1} \bm{e_1} = \bm{e_1} \cdot \bm{e_1} = ||\bm{e_1}||^2 = 1

Note that we'll end up writing a lot of equations with basis vectors multiplied together, so it's useful to have a shorthand: e12 will be used as a shorthand for e1 e2.

We can now complete a multiplication table for the geometric product of all the basis elements:


    \begin{table}[]
    \centering
    \begin{tabular}{l|llll}
           & 1       & e_1     & e_2    & e_{12} \\ \hline
    1      & 1       & e_1     & e_2    & e_{12} \\
    e_1    & e_1     & 1       & e_{12} & e_2    \\
    e_2    & e_2     & -e_{12} & 1      & -e_1   \\
    e_{12} & e_{12}  & -e_2    & e_1    & -1
    \end{tabular}
    \end{table}
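If you'd like to verify the table mechanically, here is a small Python sketch (mine, not part of the original article) that stores a general element of 𝔾2 as a coefficient tuple (scalar, e1, e2, e12) and multiplies using only the rules derived above:

    # Sketch: the full geometric product on G2, with elements stored as
    # coefficient tuples (scalar, e1, e2, e12). The signs come from
    # e1 e1 = e2 e2 = 1, e1 e2 = -e2 e1 = e12, and e12 e12 = -1.
    def gp(m, n):
        s1, x1, y1, b1 = m
        s2, x2, y2, b2 = n
        return (
            s1*s2 + x1*x2 + y1*y2 - b1*b2,   # scalar part
            s1*x2 + x1*s2 - y1*b2 + b1*y2,   # e1 part
            s1*y2 + y1*s2 + x1*b2 - b1*x2,   # e2 part
            s1*b2 + b1*s2 + x1*y2 - y1*x2,   # e12 part
        )

    # Reproduce the multiplication table for the four basis elements.
    basis = {"1": (1, 0, 0, 0), "e1": (0, 1, 0, 0),
             "e2": (0, 0, 1, 0), "e12": (0, 0, 0, 1)}
    for name1, m in basis.items():
        for name2, n in basis.items():
            print(name1, name2, gp(m, n))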

Now that we know what elements of 𝔾2 look like and how to manipulate them, it's time to put them to work.

Applying Geometric Algebra

Multiplying Vectors

Let's start by multiplying two vectors:


        \begin{align*}
          \bm{a} &= a_1 \bm{e_{1}} + a_2 \bm{e_{2}} \\
          \bm{b} &= b_1 \bm{e_{1}} + b_2 \bm{e_{2}}
        \end{align*}

Under the geometric product we get:


        \begin{align*}
        \bm{a} \bm {b} &= (a_1 \bm{e_{1}} + a_2 \bm{e_{2}})  (b_1 \bm{e_{1}} + b_2 \bm{e_{2}}) \\
                              &= a_1 b_1 \bm{e_{1}}  \bm{e_{1}}
                               + a_1 b_2 \bm{e_{1}}  \bm{e_{2}}
                               + a_2 b_1 \bm{e_{2}}  \bm{e_{1}}
                               + a_2 b_2 \bm{e_{2}}  \bm{e_{2}} \\
                              &=  a_1 b_1 + a_1 b_2 \bm{e_{1}}  \bm{e_{2}}
                              + a_2 b_1 \bm{e_{2}}  \bm{e_{1}} + a_2 b_2 \\
                              &=  a_1 b_1 + a_2 b_2 + a_1 b_2 \bm{e_{12}}
                              - a_2 b_1 \bm{e_{12}} \\
                              &= (a_1 b_1 + a_2 b_2) + (a_1 b_2  - a_2 b_1) \bm{e_{12}}
        \end{align*}

We can see that from the product of two vectors we get a scalar and a bivector.
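For example, with the two vectors from the addition example earlier, a = -1e1 + 2e2 and b = 2e1 + 3e2:


        \begin{align*}
        \bm{a} \bm{b} &= ((-1)(2) + (2)(3)) + ((-1)(3) - (2)(2)) \bm{e_{12}} \\
                      &= 4 - 7 \bm{e_{12}}
        \end{align*}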

What if we take a scalar plus a bivector and multiply that by a vector? Note that below we are using a capital letter for our scalar plus bivector.


        \begin{align*}
          \bm{a} &= a_1 \bm{e_{1}} + a_2 \bm{e_{2}} \\
          \bm{B} &= B_1 + B_2 \bm{e_{12}}
        \end{align*}

        \begin{align*}
        \bm{a} \bm {B} &= (a_1 \bm{e_{1}} + a_2 \bm{e_{2}})  (B_1  + B_2 \bm{e_{12}}) \\
                          &= a_1 B_1 \bm{e_{1}}
                           + a_1 B_2 \bm{e_{1}}  \bm{e_{12}}
                           + a_2 B_1 \bm{e_{2}}
                           + a_2 B_2 \bm{e_{2}}  \bm{e_{12}} \\
                           &= a_1 B_1 \bm{e_{1}}
                           + a_2 B_1 \bm{e_{2}}
                           + a_1 B_2 \bm{e_{1}}  \bm{e_{12}}
                           + a_2 B_2 \bm{e_{2}}  \bm{e_{12}} \\
                           &= a_1 B_1 \bm{e_{1}}
                           + a_2 B_1 \bm{e_{2}}
                           + a_1 B_2 \bm{e_{2}}
                           - a_2 B_2 \bm{e_{1}} \\
                           &= ( a_1 B_1  - a_2 B_2 )\bm{e_{1}}
                           + ( a_2 B_1 + a_1 B_2 ) \bm{e_{2}}  \\
        \end{align*}

That product gives us back a vector, so B is an element of 𝔾2 that operates on vectors through the geometric product to give us another vector.

Rotors

A special case of B is called a Rotor. This Rotor is an element of 𝔾2 that is just a restatement of Euler's formula in 𝔾2.

First, for reasons that will become clearer later, we will begin to abbreviate e12 as I. Our Rotor is then defined as:

 e^{\theta \bm{I}} := \cos(\theta) + \sin(\theta)\bm{I}

If you multiply any vector by this Rotor on the right it will rotate that vector through the angle θ in the direction from e1 to e2. If you multiply that same vector on the left by this Rotor it will be rotated through θ in the opposite direction.
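For example, take θ = π/2, so the Rotor reduces to just I. Multiplying e1 on the right rotates it to e2, while multiplying on the left rotates it the opposite way, to -e2:


        \begin{align*}
        \bm{e_1} e^{\frac{\pi}{2} \bm{I}} &= \bm{e_1} \bm{e_{12}} = \bm{e_2} \\
        e^{\frac{\pi}{2} \bm{I}} \bm{e_1} &= \bm{e_{12}} \bm{e_1} = -\bm{e_2}
        \end{align*}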

For example, here is a dynamic illustration of the Rotor in action. In this case, we are multiplying e1 by eωtI, where t is time, and ω is the rate, in radians per second, at which the vector rotates. In this example we set ω = 1, so the vector completes a full circle every 2π seconds.

\bm{v} = \bm{e_1} e^{t \bm{I}}

Caveat: Rotors only work like this in ℝ2; in ℝ3 and above the formulation changes, so be aware of that.

Using geometric algebra makes it easy to read off from this formula what is going to happen: the e1 vector will be operated on via the geometric product, and the result will be another vector, rotated ωt radians in a counter-clockwise direction.

Since our Rotor is a member of 𝔾2 it can be combined with other operations. For example, we could start with a vector p at an initial position and then perturb it by adding a second vector that is multiplied by our Rotor. In this case we set ω = 2.

\bm{v} = \bm{p} + 0.5 \bm{e_1} e^{2 t \bm{I}}

We can take that one step further and rotate the whole thing around the origin, where we set ω1 = 2.9 and ω2 = 1.

\bm{v} = (\bm{p} + 0.5 \bm{e_1} e^{\omega_1 t \bm{I}})e^{\omega_2 t \bm{I}}

That might be easier to follow if instead of drawing the vector we draw the trail of points where the vector has been.

Double Angle Formula

Some of the power of Geometric Algebra comes from being able to go back and forth between looking at a problem geometrically and looking at it algebraically. For example, it is easy to reason that rotating a vector θ degrees twice is the same as rotating that same vector 2θ degrees. We can write that out as an algebraic expression:


        e^{2 \theta \bm{I}} = e^{\theta \bm{I}} e^{\theta \bm{I}}

If we expand both sides of the equation above using the definition of the Rotor we get:


        \begin{align*}
        \cos 2 \theta + \sin 2 \theta \bm{I} &= (\cos \theta + \sin \theta \bm{I}) (\cos \theta + \sin \theta \bm{I}) \\
           &= \cos^2 \theta + 2 \cos \theta \sin \theta \bm{I} + \sin^2 \theta \bm{I}^2 \\
           &= \cos^2 \theta + 2 \cos \theta \sin \theta \bm{I} - \sin^2 \theta \\
           &= \cos^2 \theta - \sin^2 \theta  + 2 \cos \theta \sin \theta \bm{I}
        \end{align*}

Comparing the coefficients on the left hand side of the equation to those on the right hand side, we find we have derived the Double Angle Formulas:


        \cos 2 \theta = \cos^2 \theta  - \sin^2 \theta

        \sin 2 \theta = 2 \cos \theta \sin \theta

You could start with the same geometric reasoning about any two angles, α and β, and use the same derivation to get the general angle sum identities, as sketched below. The power here is the ability to move back and forth between algebraic and geometric reasoning quickly and easily.
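Sketching that out: rotating by α and then by β is the same as rotating by α + β, so expanding e^{(α+β)I} = e^{αI} e^{βI} exactly as above gives the angle sum identities directly:


        \begin{align*}
        \cos(\alpha + \beta) + \sin(\alpha + \beta) \bm{I} &= (\cos \alpha + \sin \alpha \bm{I})(\cos \beta + \sin \beta \bm{I}) \\
          &= (\cos \alpha \cos \beta - \sin \alpha \sin \beta) + (\sin \alpha \cos \beta + \cos \alpha \sin \beta) \bm{I}
        \end{align*}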

Complex Numbers

From our definition of the Rotor, if we set θ to 90 degrees then, since the cos term becomes 0, we are left with just I, which is a 90 degree Rotor. But if we apply a 90 degree Rotor twice we should get a 180 degree Rotor:


        \begin{align*}
        \bm{I} \bm{I} &= \bm{e_{12}} \bm{e_{12}} \\
                      &= \bm{e_1} \bm{e_2} \bm{e_1} \bm{e_2} \\
                      &= - \bm{e_1} \bm{e_1} \bm{e_2} \bm{e_2} \\
                      &= - 1 \bm{e_2} \bm{e_2} \\
                      &= - 1 \\
        \end{align*}

And -1 is exactly what we would expect, since that's what you multiply a vector by to rotate it 180 degrees. But what we also have is a quantity in 𝔾2 that, when squared, is equal to -1. This should remind you of i in the complex numbers ℂ, but without the need to take the square root of a negative number, or to invoke anything imaginary. In fact, the subset of all linear combinations of {1, I} is closed under the geometric product and is isomorphic to ℂ.
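To make the isomorphism concrete, multiply two such elements; the result is exactly the complex multiplication rule, with I playing the role of i:


        \begin{align*}
        (a + b \bm{I})(c + d \bm{I}) &= ac + ad \bm{I} + bc \bm{I} + bd \bm{I}^2 \\
                                     &= (ac - bd) + (ad + bc) \bm{I}
        \end{align*}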

Characterizing B

Now that we have learned about Rotors, let's apply that knowledge to characterize elements of the form:


          \bm{B} = B_1 + B_2 \bm{e_{12}}

First, let's look at the relationship between any two non-zero vectors.

We can reason out geometrically that given b we can get a from it by first scaling b to have a norm of 1, then rotating it to have the same direction as a, and finally scaling that unit vector to have the same length as a. Now write that out algebraically, where θ is the angle between the two vectors.


        \begin{align*}
        \bm{a} &= ||\bm{a}|| e^{\theta \bm{I}} \frac{1}{||\bm{b}||} \bm{b} \\
               &= \frac{||\bm{a}||}{||\bm{b}||} e^{\theta \bm{I}} \bm{b}
        \end{align*}

If we look at any product of two non-zero vectors, ab, we know we get an operator that, under the geometric product, takes vectors and returns new vectors. If we substitute our derivation of how to get a from b, then we get:


        \begin{align*}
        \bm{ab} &= \frac{||\bm{a}||}{||\bm{b}||} e^{\theta \bm{I}} \bm{b} \bm{b} \\
                &= \frac{||\bm{a}||}{||\bm{b}||} e^{\theta \bm{I}} ||\bm{b}||^2  \\
                &= ||\bm{a}|| ||\bm{b}|| e^{\theta \bm{I}}
        \end{align*}

So every such operator ab is actually just a rotation and a dilation. We can see this in action if we have the operator ab and apply it to vector c to get vector d. The animation will perturb vector b to show how that affects vector d.


        \bm{d} = \bm{ab}\bm{c}

Our generalized form for the geometric product of two vectors is:


          \bm{B} = B_1 + B_2 \bm{e_{12}}

We can use what we've learned so far to break that apart into its scalar and Rotor components:


        \bm{B} = k e^{\theta \bm{I}}

Start by applying B to the basis vector e1, which we know has a norm of 1, giving us a new vector v.


        \begin{align*}
        \bm{v} &= \bm{B}\bm{e_1} \\
               &= k e^{\theta \bm{I}} \bm{e_1}
        \end{align*}

We can see from the last equation that v has a norm of k, and now that we know k, we can divide B by k to get our Rotor.


        \begin{align*}
        k &= ||\bm{B}\bm{e_1}|| = \sqrt{B_{1}^{2} + B_{2}^{2}} \\
        e^{\theta \bm{I}} &= \frac{1}{k} \bm{B}
        \end{align*}
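As a concrete example (my numbers, not from the original), take B = 1 + e12. Then:


        \begin{align*}
        k &= \sqrt{1^2 + 1^2} = \sqrt{2} \\
        e^{\theta \bm{I}} &= \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{2}} \bm{I} \implies \theta = \frac{\pi}{4}
        \end{align*}

So this particular B rotates vectors through 45 degrees and dilates them by a factor of √2.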

Ratios

While applying the operator ab above did show some of the behavior, it may be useful to start over, this time building our operator from a ratio. That is, if we have two vectors a and b, and are given a third vector c, we'd like to calculate the vector d such that d is to c as b is to a, i.e.


         \bm{d} / \bm{c} = \bm{b} / \bm{a}

The geometric product isn't commutative, so we have to choose a side to do the division on; we will write this as:


         \bm{d}\bm{c^{-1}} = \bm{b}\bm{a^{-1}}

But that's just a simple algebraic equation we can solve by multiplying both sides on the right by c.


        \begin{align*}
        \bm{d}\bm{c^{-1}}       &= \bm{b}\bm{a^{-1}}       \\
        \bm{d}\bm{c^{-1}}\bm{c} &= \bm{b}\bm{a^{-1}}\bm{c} \\
        \bm{d} &= \bm{b}\bm{a^{-1}}\bm{c}
        \end{align*}

The operator ba⁻¹ rotates c through the angle between a and b, and dilates it in proportion to the ratio of their norms. The following animation shows that relationship, also perturbing b to show the effect on d.
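Here is a numeric sketch of the ratio construction (mine, not part of the original article), using the same coefficient-tuple representation of 𝔾2 as the earlier multiplication-table sketch; vector inverses come from property (7), a⁻¹ = a/||a||²:

    # Sketch: compute d = b a^(-1) c in G2. Elements are coefficient
    # tuples (scalar, e1, e2, e12); gp is the full geometric product.
    def gp(m, n):
        s1, x1, y1, b1 = m
        s2, x2, y2, b2 = n
        return (
            s1*s2 + x1*x2 + y1*y2 - b1*b2,
            s1*x2 + x1*s2 - y1*b2 + b1*y2,
            s1*y2 + y1*s2 + x1*b2 - b1*x2,
            s1*b2 + b1*s2 + x1*y2 - y1*x2,
        )

    def vec(x, y):
        return (0.0, x, y, 0.0)

    def vec_inverse(v):
        # a^(-1) = a / ||a||^2, property (7) of the geometric product.
        n2 = v[1]**2 + v[2]**2
        return (0.0, v[1] / n2, v[2] / n2, 0.0)

    a, b, c = vec(1, 0), vec(1, 1), vec(0, 2)
    d = gp(gp(b, vec_inverse(a)), c)
    print(d)   # (0.0, -2.0, 2.0, 0.0): a pure vector, -2 e1 + 2 e2

Here b is a rotated 45 degrees and scaled by √2, and d comes out as c rotated and scaled in exactly the same way, as expected.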

Conjugates and Inverses

Let's see what the difference between ab and ba is. First let's multiply out in terms of basis vectors:


        \begin{align*}
        \bm{a} \bm {b} &= (a_1 \bm{e_{1}} + a_2 \bm{e_{2}})  (b_1 \bm{e_{1}} + b_2 \bm{e_{2}}) \\
                              &= a_1 b_1 \bm{e_{1}}  \bm{e_{1}}
                               + a_1 b_2 \bm{e_{1}}  \bm{e_{2}}
                               + a_2 b_1 \bm{e_{2}}  \bm{e_{1}}
                               + a_2 b_2 \bm{e_{2}}  \bm{e_{2}} \\
                              &=  a_1 b_1 + a_1 b_2 \bm{e_{1}}  \bm{e_{2}}
                              + a_2 b_1 \bm{e_{2}}  \bm{e_{1}} + a_2 b_2 \\
                              &=  a_1 b_1 + a_2 b_2 + a_1 b_2 \bm{e_{12}}
                              - a_2 b_1 \bm{e_{12}} \\
                              &= (a_1 b_1 + a_2 b_2) + (a_1 b_2  - a_2 b_1) \bm{I}
        \end{align*}

If we swap a and b we get:


        \begin{align*}
        \bm{b} \bm {a} &= (b_1 a_1 + b_2 a_2) + (b_1 a_2 - b_2 a_1) \bm{I} \\
                       &= (a_1 b_1 + a_2 b_2) + (b_1 a_2 - b_2 a_1) \bm{I} \\
                       &= (a_1 b_1 + a_2 b_2) + (a_2 b_1 - a_1 b_2) \bm{I} \\
                       &= (a_1 b_1 + a_2 b_2) - (a_1 b_2 - a_2 b_1) \bm{I} \\
        \end{align*}

In that last step we just factor out a -1 from the coefficient of I. If we substitute:


        \begin{align*}
        B_1  &= a_1 b_1 + a_2 b_2 \\
        B_2  &= a_1 b_2 - a_2 b_1
        \end{align*}

Then we get:


        \begin{align*}
        \bm{a} \bm {b} &= B_1 + B_2 \bm{I} \\
        \bm{b} \bm {a} &= B_1 - B_2 \bm{I}
        \end{align*}

So if we reverse the order of the geometric product of our vectors we end up with the equivalent of the complex conjugate.

We will denote the reverse of the product of two vectors with a dagger. While this maps to the complex conjugate in 𝔾2, reversing a product of multiple vectors will be more important and powerful in 𝔾3.


        \begin{align*}
        \bm{B}            &= \bm{a} \bm{b}  &= B_1 + B_2 \bm{I} \\
        \bm{B}^{\dagger}  &= \bm{b} \bm{a}  &= B_1 - B_2 \bm{I}
        \end{align*}

If we multiply them together we find:


        \begin{align*}
        \bm{B} \bm{B}^{\dagger} &= \bm{abba}  \\
              &= \bm{a} ||\bm{b}||^2 \bm{a}     \\
              &= ||\bm{b}||^2 \bm{aa}         \\
              &= ||\bm{b}||^2 ||\bm{a}||^2    \\
              &= ||\bm{a}||^2 ||\bm{b}||^2
        \end{align*}

Their product ends up being just a scalar, so if we divide by that scalar value we get:


        \begin{align*}
        \bm{B} \frac{\bm{B}^{\dagger}}{||\bm{a}||^2 ||\bm{b}||^2} &= \frac{||\bm{a}||^2 ||\bm{b}||^2}{  ||\bm{a}||^2 ||\bm{b}||^2} \\
              &= 1
        \end{align*}

Which means we've found the multiplicative inverse of B.


        \bm{B}^{-1} = \frac{\bm{B}^{\dagger}}{\bm{B}\bm{B}^{\dagger}}

Normally geometric products aren't commutative, but in this case we can see that we get the same result when we reverse the order of B and B dagger:


        \begin{align*}
        \bm{B}^{\dagger}\bm{B} &= \bm{baab}  \\
              &= \bm{b} ||\bm{a}||^2 \bm{b}     \\
              &= ||\bm{a}||^2 \bm{bb}         \\
              &= ||\bm{a}||^2 ||\bm{b}||^2
        \end{align*}

So our inverse will work whether applied on the left or on the right.
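As a final numeric check (again a sketch of mine, reusing the coefficient-tuple representation), the inverse built from the reverse really does multiply back to 1 from either side:

    # Sketch: verify B B^(-1) = B^(-1) B = 1 for B = ab, with a and b
    # the example vectors used earlier. Elements are (scalar, e1, e2, e12).
    def gp(m, n):
        s1, x1, y1, b1 = m
        s2, x2, y2, b2 = n
        return (
            s1*s2 + x1*x2 + y1*y2 - b1*b2,
            s1*x2 + x1*s2 - y1*b2 + b1*y2,
            s1*y2 + y1*s2 + x1*b2 - b1*x2,
            s1*b2 + b1*s2 + x1*y2 - y1*x2,
        )

    a = (0.0, -1.0, 2.0, 0.0)   # a = -1 e1 + 2 e2
    b = (0.0,  2.0, 3.0, 0.0)   # b =  2 e1 + 3 e2

    B = gp(a, b)                # B     = ab = B1 + B2 I
    B_dag = gp(b, a)            # B_dag = ba = B1 - B2 I
    denom = gp(B, B_dag)[0]     # B B_dag is a pure scalar: ||a||^2 ||b||^2
    B_inv = tuple(x / denom for x in B_dag)

    print(gp(B, B_inv))   # ~ (1.0, 0.0, 0.0, 0.0)
    print(gp(B_inv, B))   # ~ (1.0, 0.0, 0.0, 0.0)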

Let's see how that inverse operates by applying it to our previous ratio example. This time we'll not only apply the ba⁻¹ operator, but also apply its inverse to c to see how it compares.


        \begin{align*}
        \bm{d}  &= \bm{b}\bm{a^{-1}}\bm{c}         \\
        \bm{d'} &= (\bm{b}\bm{a^{-1}})^{-1}\bm{c}
        \end{align*}

Note that starting from conjugates isn't the only way to construct such an inverse. We could, for example, use the fact that each non-zero vector has a multiplicative inverse to come to the same conclusion:


        \begin{align*}
        1 &= \bm{a} (\frac{\bm{a}}{||\bm{a}||^2})                                        \\
          &= \bm{a} 1 (\frac{\bm{a}}{||\bm{a}||^2})                                      \\
          &= \bm{a} (\bm{b} (\frac{\bm{b}}{||\bm{b}||^2})) (\frac{\bm{a}}{||\bm{a}||^2}) \\
          &= \bm{ab} (\frac{\bm{ba}}{||\bm{a}||^2||\bm{b}||^2})
        \end{align*}

Further Reading

There are other introductions to GA around the web, some of the ones I've found helpful are:

2016-12-21

Pat McCrory in the context of elite overproduction.

As we head into the fourth week of Pat McCrory's failure to concede in the NC gubernatorial race, it's important to look at McCrory's infantile behavior in a larger context. While it would be tempting to try to psychoanalyze his continued intransigence as yet another man-child temper tantrum, there are larger forces at work, of which McCrory is just one sad symptom.

The root of the problem stems from the ever widening wealth gap and subsequent elite overproduction. As more and more millionaires and billionaires are minted they seek to convert their newfound wealth into political power. But the levers of power are limited, there's a finite number of House and Senate seats, there are only 50 governors, and only one President. No new power levers are being created, yet there are more and more people scrambling for them. More and more millionaires think they should be on the city council or run for their state legislature, while more billionaires think they too should be President. In a simple example of the law of supply and demand, you can see the price of running for office rising steeply over the last 40 years, with the cost of running for President exceeding $2 billion in 2012.

But what happens when the demand outstrips the supply so that no manner of money can buy you that cherished lever of power? When there are multiple millionaires, each backed by a group of billionaires, all vying for power? What do you pay then? You pay in social norms. Common decency. The destruction of these is the price you pay. Pat McCrory, in a desperate bid to retain his power, is willing to violate every norm of U.S. democracy and attempt to destroy all faith in the election process, the same process that put him in power four years ago. If Pat McCrory can't be governor, well then, he might as well burn down the entire edifice so no one else can be either.

You can also see this playing out on the national stage, with Donald Trump willing to violate every norm, digging deep into the veins of xenophobia, racism, and bigotry to propel himself into office. Under normal circumstances no politician would openly court the worst side of the human tribal instinct; the horrors from the last rise of fascism leading up to WWII have been too close and too fresh in memory. But time has passed, the last survivors of WWII are dying out, and the reality of tens of millions of people dying in wars, revolutions, and pogroms is just a dry history lesson now, not to be considered in the raw and ugly scramble for power.

So don't blame Pat McCrory, as infantile and destructive as his behavior has become; he is just a symptom of a much larger breakdown in political norms, and these are just a couple of steps along a longer arc of societal disintegration. We're already seeing the normal U.S. two-party system fragment into five distinct parties: the neo-liberal wing of the Democratic party as exemplified by Hillary Clinton, the populist wing of Bernie Sanders followers, the traditional big business GOP, the Tea Party republicans, and finally the Trumpers. And this isn't the end of the disruption, merely the beginning. Layer on global warming, the continued disruption of technology, and the world wide migration of people from rural areas into cities and we have all the ingredients for massive upheaval. Will we descend into our own Cultural Revolution, shatter the country in another Civil War, or will the similar rise of fascism across Europe lead to another world-engulfing spasm of death and destruction? The patterns are all there, the roots of the problem can be clearly mapped out, and while there's no guaranteed way to avoid the coming disintegration, maybe we should at least try.

2016-12-05

Surely we've seen this before. Not.

You might think that, as the U.S. moves from an industrial and manufacturing based economy to a knowledge based economy, we have surely weathered similar transitions before; for example, as we moved from an agricultural economy to a manufacturing based one. While we did indeed weather the same changes, the vital difference is the timescale over which those changes took place.

As you can see from this data on agricultural employment, we did experience a loss of 6 million agricultural jobs over a 50 year period from 1910 to 1960. In contrast, from Voter anger explained—in one chart, you can see that the U.S. economy also lost 6 million manufacturing jobs, but this time in just 10 years, from 2000 to 2010.

2016-11-19