Geometric Algebra is fascinating, and I believe it solves a large number of problems that arise from a more traditional approach to vectors, but I've been very disappointed with the quality of books and explanations I've found: most of them zoom off into abstract realms too quickly, or spend an inordinate amount of time building up a generalized theory before finally getting to something useful.

Below is an explanation of Geometric Algebra that will start with a simple two
dimensional vector space, i.e. ℝ^{2}. This will be a concise
introduction to 𝔾^{2}, the Geometric Algebra over ℝ^{2}, and
then quickly pivot to applications in 𝔾^{2}. This introduction will
not cover the fascinating history of GA,
Clifford Algebras, or
Hermann Grassmann.

I'll presume a familiarity with
Linear Algebra;
we'll then introduce the geometric product on ℝ^{2} and we'll have the
Geometric Algebra over two dimensions: 𝔾^{2}.

### Linear Algebra

> Linear algebra is the branch of mathematics concerning vector spaces and linear mappings between such spaces. It includes the study of lines, planes, and subspaces, but is also concerned with properties common to all vector spaces. -Wikipedia

You should be familiar with the following axioms and definitions from Linear Algebra:

$$(\boldsymbol{a} + \boldsymbol{b}) + \boldsymbol{c} = \boldsymbol{a} + (\boldsymbol{b} + \boldsymbol{c})$$ | Associative | (1) |

$$\boldsymbol{a} + \boldsymbol{b} = \boldsymbol{b} + \boldsymbol{a} $$ | Commutative | (2) |

$$\boldsymbol{0} + \boldsymbol{b} = \boldsymbol{b} $$ | Identity | (3) |

$$\boldsymbol{-a} + \boldsymbol{a} = \boldsymbol{0}$$ | Inverse | (4) |

$$c(\boldsymbol{a} + \boldsymbol{b}) = c\boldsymbol{a} + c\boldsymbol{b} $$ | Scalar Distributive | (5) |

$$1 \boldsymbol{b} = \boldsymbol{b} $$ | Multiplicative Identity | (6) |

$$\boldsymbol{a} \cdot \boldsymbol{b} = ||\boldsymbol{a}|| ||\boldsymbol{b}|| \cos \theta$$ | Dot/Inner Product | (7) |

$$\boldsymbol{a} \cdot \boldsymbol{b} = \sum_{i} a_i b_i $$ | Dot/Inner Product (Alternate) | (8) |

In particular, for ℝ^{2} we have an orthonormal basis:

- $$ \boldsymbol{e_{1}} := (1,0) $$
- $$ \boldsymbol{e_{2}} := (0,1) $$

where:

- $$ \boldsymbol{e_{1}} \perp \boldsymbol{e_{2}} $$

We know how to do vector addition and scalar multiplication of vectors, and that any vector can be represented as a linear combination of basis elements.

- $$ \begin{align*} \boldsymbol{a} &= -1 \boldsymbol{e_{1}} + 2 \boldsymbol{e_{2}} \\ \boldsymbol{b} &= 2 \boldsymbol{e_{1}} + 3 \boldsymbol{e_{2}} \\ \boldsymbol{a} + \boldsymbol{b} &= 1 \boldsymbol{e_1} + 5 \boldsymbol{e_2} \end{align*} $$

One thing to remember about the dot product, or inner product, is that it is 0 for orthogonal vectors:

- $$ \boldsymbol{e_{1}} \perp \boldsymbol{e_{2}} \implies \boldsymbol{e_1} \cdot \boldsymbol{e_2} = 0$$

And that a vector dotted with itself gives the square of the norm of the vector, since $$\cos 0 = 1$$:

- $$ \boldsymbol{a} \cdot \boldsymbol{a} = {||\boldsymbol{a}||}^2$$
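Both facts are easy to check numerically. Here is a minimal sketch (the function name `dot` and the sample vector are my own choices, not from the text):

```python
import math

def dot(a, b):
    # Coordinate form of the dot product, equation (8).
    return sum(x * y for x, y in zip(a, b))

e1 = (1.0, 0.0)
e2 = (0.0, 1.0)
assert dot(e1, e2) == 0.0                            # orthogonal => dot is 0

a = (-1.0, 2.0)
assert math.isclose(dot(a, a), math.hypot(*a) ** 2)  # a . a = ||a||^2
```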

One important thing to notice about Linear Algebra is how often you have to
step outside of ℝ^{2} to get work done. That is, operations frequently
take place outside ℝ^{2}, or give you results outside of ℝ^{2}. For
example, the dot product of two vectors returns a scalar, which is not a
member of ℝ^{2}.

- $$ \boldsymbol{a} \cdot \boldsymbol{b} := ||\boldsymbol{a}|| ||\boldsymbol{b}|| \cos(\theta) $$

Similarly, to rotate vectors you have to create matrices, which don't exist in
ℝ^{2}, and apply them to vectors through matrix multiplication.

One final example is the cross product, which takes two vectors and produces
a third vector orthogonal to the original two. But in ℝ^{2} that
orthogonal vector doesn't exist; you have to view the cross product as living
in ℝ^{3}, in which the original ℝ^{2} is embedded.

All of this stands in stark contrast to 𝔾^{2}, where these operations
take place entirely within 𝔾^{2}. In fact, many of the constructs we use
in Linear Algebra, such as rotations, exist as elements of 𝔾^{2}, and
applying them is just a matter of taking the geometric product of those
objects. Not only is 𝔾^{2} closed under many of these operations, but
the operations themselves exist as elements of 𝔾^{2}.

### Geometric Algebra

The Geometric Algebra 𝔾^{2} builds upon ℝ^{2}, extending it
by adding multiplication, i.e. a geometric product. Before we get to the
geometric product we first need to quickly learn about the exterior product.

#### Exterior Product

The exterior product operates on two vectors and is written as:

- $$\boldsymbol{a} \wedge \boldsymbol{b}$$

The exterior product represents the oriented area defined by the two vectors; more precisely, it represents an oriented area in the plane defined by those vectors, also known as a bivector. There are two important aspects of this. The first is that the exact shape doesn't matter: for example, the bivectors represented below are equal because they have the same orientation (counter-clockwise) and the same area (3).

- $$ (1, 0) \wedge (0, 3) = (3, 0) \wedge (0, 1)$$

The second important factor is that the exterior product is anticommutative, that is, if you reverse the order of the vectors involved then the sign of the exterior product changes.

- $$\boldsymbol{a} \wedge \boldsymbol{b} = - \boldsymbol{b} \wedge \boldsymbol{a}$$

Using two of the vectors above, note that the order in which they appear in the exterior product makes the resulting bivector either clockwise or counter-clockwise.

The properties of the exterior product are:

$$(\boldsymbol{a} \wedge \boldsymbol{b}) \wedge \boldsymbol{c} = \boldsymbol{a} \wedge (\boldsymbol{b} \wedge \boldsymbol{c})$$ | Associative | (1) |

$$c(\boldsymbol{a} \wedge \boldsymbol{b}) = c\boldsymbol{a} \wedge \boldsymbol{b} = \boldsymbol{a} \wedge c\boldsymbol{b}$$ | Scalar Associativity | (2) |

$$\boldsymbol{a} \wedge (\boldsymbol{b} + \boldsymbol{c}) = \boldsymbol{a} \wedge \boldsymbol{b} + \boldsymbol{a} \wedge \boldsymbol{c}$$ | Left Distributive | (3) |

$$(\boldsymbol{a} + \boldsymbol{b}) \wedge \boldsymbol{c} = \boldsymbol{a} \wedge \boldsymbol{c} + \boldsymbol{b} \wedge \boldsymbol{c}$$ | Right Distributive | (4) |

$$\boldsymbol{a} \wedge \boldsymbol{b} = -\boldsymbol{b} \wedge \boldsymbol{a}$$ | Anti-symmetric | (5) |

$$\boldsymbol{a} \parallel \boldsymbol{b} \Rightarrow \boldsymbol{a} \wedge \boldsymbol{b} = 0$$ | Zero for Parallel Vectors. | (6) |

In what is going to become a recurring theme, let's look at what this means in terms of basis vectors. Since any vector can be written as a linear combination of basis vectors we get:

- $$ \begin{align*} \boldsymbol{a} &= a_1 \boldsymbol{e_{1}} + a_2 \boldsymbol{e_{2}} \\ \boldsymbol{b} &= b_1 \boldsymbol{e_{1}} + b_2 \boldsymbol{e_{2}} \end{align*} $$

If we take their exterior product we get:

- $$ \begin{align*} \boldsymbol{a} \wedge \boldsymbol {b} &= (a_1 \boldsymbol{e_{1}} + a_2 \boldsymbol{e_{2}}) \wedge (b_1 \boldsymbol{e_{1}} + b_2 \boldsymbol{e_{2}}) \\ &= a_1 b_1 \boldsymbol{e_{1}} \wedge \boldsymbol{e_{1}} + a_1 b_2 \boldsymbol{e_{1}} \wedge \boldsymbol{e_{2}} + a_2 b_1 \boldsymbol{e_{2}} \wedge \boldsymbol{e_{1}} + a_2 b_2 \boldsymbol{e_{2}} \wedge \boldsymbol{e_{2}} & \text{via 3, 4} \\ &= 0 + a_1 b_2 \boldsymbol{e_{1}} \wedge \boldsymbol{e_{2}} + a_2 b_1 \boldsymbol{e_{2}} \wedge \boldsymbol{e_{1}} + 0 & \text{via 6} \\ &= a_1 b_2 \boldsymbol{e_{1}} \wedge \boldsymbol{e_{2}} - a_2 b_1 \boldsymbol{e_{1}} \wedge \boldsymbol{e_{2}} & \text{via 5} \\ &= ( a_1 b_2 - a_2 b_1 )\boldsymbol{e_{1}} \wedge \boldsymbol{e_{2}} & \text{via 2} \end{align*} $$

So the exterior product of any two vectors can be expressed as just a scalar
multiple of $\boldsymbol{e_{1}} \wedge \boldsymbol{e_{2}}$.
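In other words, a bivector in two dimensions is determined by a single coefficient, which makes it a one-liner to compute. A small sketch (the function name is my own):

```python
def wedge(a, b):
    # Coefficient of e1 ^ e2 in a ^ b: the signed area a1*b2 - a2*b1.
    return a[0] * b[1] - a[1] * b[0]

# The two bivectors from the earlier example are equal: same area (3),
# same (counter-clockwise) orientation.
assert wedge((1, 0), (0, 3)) == wedge((3, 0), (0, 1)) == 3

# Anti-symmetry: a ^ b = -(b ^ a)
a, b = (-1, 2), (2, 3)
assert wedge(a, b) == -wedge(b, a) == -7
```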

#### Geometric Product

Now that we know about the exterior product, we can define the geometric product, which is just the sum of the inner product and the exterior product:

- $$\boldsymbol{a} \boldsymbol{b} := \boldsymbol{a} \cdot \boldsymbol{b} +\boldsymbol{a} \wedge \boldsymbol{b}$$

Using just the above definition you can show that the geometric product has the following properties:

$$(\boldsymbol{a} \boldsymbol{b}) \boldsymbol{c} = \boldsymbol{a} (\boldsymbol{b} \boldsymbol{c})$$ | Associative | (1) |

$$c(\boldsymbol{a} \boldsymbol{b}) = c\boldsymbol{a} \boldsymbol{b} = \boldsymbol{a} c \boldsymbol{b}$$ | Scalar Associativity | (2) |

$$\boldsymbol{a} (\boldsymbol{b} + \boldsymbol{c}) = \boldsymbol{a} \boldsymbol{b} + \boldsymbol{a} \boldsymbol{c}$$ | Left Distributive | (3) |

$$(\boldsymbol{a} + \boldsymbol{b}) \boldsymbol{c} = \boldsymbol{a} \boldsymbol{c} + \boldsymbol{b} \boldsymbol{c}$$ | Right Distributive | (4) |

$$\boldsymbol{a} \boldsymbol{a} = \boldsymbol{a} \cdot \boldsymbol{a} = ||\boldsymbol{a}||^2 $$ | Norm | (5) |

$$\boldsymbol{a} \boldsymbol{b} \neq \boldsymbol{b} \boldsymbol{a} $$ | Non-Commutative, except in some cases. | (6) |

$$\boldsymbol{a} \neq 0 \Rightarrow \boldsymbol{a} (\frac{1}{||\boldsymbol{a}||^2} \boldsymbol{a}) = 1$$ | Vector Inverses | (7) |

$$\boldsymbol{a} \perp \boldsymbol{b} \Rightarrow \boldsymbol{a} \boldsymbol{b} = \boldsymbol{a} \wedge \boldsymbol{b}$$ | Orthogonal vector multiplication. | (8) |

With the geometric product as defined above, and vector addition, our
Geometric Algebra 𝔾^{2} forms a
unital associative algebra
with an orthonormal basis:

- $$ 1, \boldsymbol{e_1}, \boldsymbol{e_2}, \boldsymbol{e_{1} e_{2}} $$

We can work out a multiplication table for the basis elements. Observe that if two elements are orthogonal then their dot product is zero, so between orthogonal vectors the geometric product reduces to the exterior product, which is anti-symmetric. For our basis vectors this means:

- $$ \boldsymbol{e_1} \boldsymbol{e_2} = \boldsymbol{e_1} \wedge \boldsymbol{e_2} $$

And that implies, by the anti-symmetry of the exterior product:

- $$ \boldsymbol{e_1} \boldsymbol{e_2} = - \boldsymbol{e_2} \boldsymbol{e_1} $$

And because any basis element is parallel to itself, its exterior product with itself is zero, so the geometric product of a basis element with itself is just the dot product:

- $$ \boldsymbol{e_1} \boldsymbol{e_1} = \boldsymbol{e_1} \cdot \boldsymbol{e_1} = ||\boldsymbol{e_1}||^2 = 1 $$

Note that we'll end up writing a lot of equations with basis vectors
multiplied together, so it's useful to have a shorthand, i.e.
$\boldsymbol{e_{12}}$ will be used as a shorthand for
$\boldsymbol{e_{1}} \boldsymbol{e_{2}}$.

We can now complete a multiplication table for the geometric product of all the basis elements:

|  | \(1\) | \(e_1\) | \(e_2\) | \(e_{12}\) |
|---|---|---|---|---|
| \(1\) | \(1\) | \(e_1\) | \(e_2\) | \(e_{12}\) |
| \(e_1\) | \(e_1\) | \(1\) | \(e_{12}\) | \(e_2\) |
| \(e_2\) | \(e_2\) | \(-e_{12}\) | \(1\) | \(-e_1\) |
| \(e_{12}\) | \(e_{12}\) | \(-e_2\) | \(e_1\) | \(-1\) |
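The table can be turned directly into code. Below is a minimal sketch (the representation and names are my own, not from the text) that stores a multivector of 𝔾^{2} as its four coefficients (scalar, e1, e2, e12) and checks a few entries of the table:

```python
def gp(m, n):
    # Geometric product of (scalar, e1, e2, e12) coefficient tuples,
    # derived from the basis multiplication table.
    s, a1, a2, b = m
    t, c1, c2, d = n
    return (s*t + a1*c1 + a2*c2 - b*d,
            s*c1 + a1*t - a2*d + b*c2,
            s*c2 + a2*t + a1*d - b*c1,
            s*d + b*t + a1*c2 - a2*c1)

E1  = (0, 1, 0, 0)
E2  = (0, 0, 1, 0)
E12 = (0, 0, 0, 1)

assert gp(E1, E2) == E12               # e1 e2 = e12
assert gp(E2, E1) == (0, 0, 0, -1)     # e2 e1 = -e12
assert gp(E1, E1) == (1, 0, 0, 0)      # e1 e1 = 1
assert gp(E12, E12) == (-1, 0, 0, 0)   # e12 e12 = -1
```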

Now that we know what elements of 𝔾^{2} look like and how to
manipulate them, it's time to put them to work.

### Applying Geometric Algebra

#### Multiplying Vectors

Let's start by multiplying two vectors:

- $$ \begin{align*} \boldsymbol{a} &= a_1 \boldsymbol{e_{1}} + a_2 \boldsymbol{e_{2}} \\ \boldsymbol{b} &= b_1 \boldsymbol{e_{1}} + b_2 \boldsymbol{e_{2}} \end{align*} $$

Under the geometric product we get:

- $$ \begin{align*} \boldsymbol{a} \boldsymbol {b} &= (a_1 \boldsymbol{e_{1}} + a_2 \boldsymbol{e_{2}}) (b_1 \boldsymbol{e_{1}} + b_2 \boldsymbol{e_{2}}) \\ &= a_1 b_1 \boldsymbol{e_{1}} \boldsymbol{e_{1}} + a_1 b_2 \boldsymbol{e_{1}} \boldsymbol{e_{2}} + a_2 b_1 \boldsymbol{e_{2}} \boldsymbol{e_{1}} + a_2 b_2 \boldsymbol{e_{2}} \boldsymbol{e_{2}} \\ &= a_1 b_1 + a_1 b_2 \boldsymbol{e_{1}} \boldsymbol{e_{2}} + a_2 b_1 \boldsymbol{e_{2}} \boldsymbol{e_{1}} + a_2 b_2 \\ &= a_1 b_1 + a_2 b_2 + a_1 b_2 \boldsymbol{e_{12}} - a_2 b_1 \boldsymbol{e_{12}} \\ &= (a_1 b_1 + a_2 b_2) + (a_1 b_2 - a_2 b_1) \boldsymbol{e_{12}} \end{align*} $$

We can see that from the product of two vectors we get a scalar and a bivector.

What if we take a scalar plus a bivector and multiply it by a vector? Note that below we are using a capital letter for our scalar-plus-bivector.

- $$ \begin{align*} \boldsymbol{a} &= a_1 \boldsymbol{e_{1}} + a_2 \boldsymbol{e_{2}} \\ \boldsymbol{B} &= B_1 + B_2 \boldsymbol{e_{12}} \end{align*} $$

- $$ \begin{align*} \boldsymbol{a} \boldsymbol {B} &= (a_1 \boldsymbol{e_{1}} + a_2 \boldsymbol{e_{2}}) (B_1 + B_2 \boldsymbol{e_{12}}) \\ &= a_1 B_1 \boldsymbol{e_{1}} + a_1 B_2 \boldsymbol{e_{1}} \boldsymbol{e_{12}} + a_2 B_1 \boldsymbol{e_{2}} + a_2 B_2 \boldsymbol{e_{2}} \boldsymbol{e_{12}} \\ &= a_1 B_1 \boldsymbol{e_{1}} + a_2 B_1 \boldsymbol{e_{2}} + a_1 B_2 \boldsymbol{e_{1}} \boldsymbol{e_{12}} + a_2 B_2 \boldsymbol{e_{2}} \boldsymbol{e_{12}} \\ &= a_1 B_1 \boldsymbol{e_{1}} + a_2 B_1 \boldsymbol{e_{2}} + a_1 B_2 \boldsymbol{e_{2}} - a_2 B_2 \boldsymbol{e_{1}} \\ &= ( a_1 B_1 - a_2 B_2 )\boldsymbol{e_{1}} + ( a_2 B_1 + a_1 B_2 ) \boldsymbol{e_{2}} \\ \end{align*} $$

That product gives us back a vector, so **B** is an element of 𝔾^{2}
that operates on vectors through the geometric product to give us another
vector.

#### Rotors

A special case of **B** is called a Rotor. A Rotor is an element of
𝔾^{2} that is just a restatement of
Euler's formula
in 𝔾^{2}.

First, for reasons that will become clearer later, we will begin to abbreviate
$\boldsymbol{e_{12}}$ as $\boldsymbol{I}$. Our Rotor is then defined as:

- $$ e^{\theta \boldsymbol{I}} := \cos(\theta) + \sin(\theta)\boldsymbol{I}$$

If you multiply any vector by this Rotor on the right, it will rotate that
vector by θ radians in the direction from $\boldsymbol{e_1}$ to
$\boldsymbol{e_2}$. If you multiply that same vector on the left by this
Rotor, it will be rotated θ radians in the opposite direction.
For example, here is a dynamic illustration of the Rotor in action. In this
case, we are multiplying $\boldsymbol{e_1}$ by $e^{\omega t \boldsymbol{I}}$,
where t is time and ω is the rate, in radians per second, at which the vector
rotates. In this example we set ω = 1, so the vector should complete a full
circle every 2π seconds.

- $$\boldsymbol{v} = \boldsymbol{e_1} e^{t \boldsymbol{I}}$$

Caveat: Rotors only work like this in ℝ^{2}, in ℝ^{3} and
above the formulation changes, so be aware of that.
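We can confirm the rotation behavior numerically. This sketch (my own, not from the text) stores a multivector as its (scalar, e1, e2, e12) coefficients and multiplies e1 by a 90-degree Rotor on the right, then on the left:

```python
import math

def gp(m, n):
    # Geometric product of (scalar, e1, e2, e12) coefficient tuples.
    s, a1, a2, b = m
    t, c1, c2, d = n
    return (s*t + a1*c1 + a2*c2 - b*d,
            s*c1 + a1*t - a2*d + b*c2,
            s*c2 + a2*t + a1*d - b*c1,
            s*d + b*t + a1*c2 - a2*c1)

def rotor(theta):
    # e^(theta I) = cos(theta) + sin(theta) e12
    return (math.cos(theta), 0.0, 0.0, math.sin(theta))

e1 = (0.0, 1.0, 0.0, 0.0)

v = gp(e1, rotor(math.pi / 2))   # right multiplication: e1 rotates to e2
assert math.isclose(v[2], 1.0) and abs(v[1]) < 1e-12

w = gp(rotor(math.pi / 2), e1)   # left multiplication: opposite direction
assert math.isclose(w[2], -1.0)
```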

Using geometric algebra makes it easy to read off this formula and determine
what is going to happen: the $\boldsymbol{e_1}$ vector is operated on via the
geometric product, and the result is another vector rotated ωt radians in a
counter-clockwise direction.

Since our Rotor is a member of 𝔾^{2} it can be combined with other
operations. For example, we could start with a vector *p* at an initial
position and then perturb it by adding to it another vector that is multiplied
by our Rotor. In this case we set ω = 2.

- $$\boldsymbol{v} = \boldsymbol{p} + 0.5 \boldsymbol{e_1} e^{2 t \boldsymbol{I}}$$

We can take that one step further and rotate the whole thing around the
origin, where we set ω_{1} = 2.9 and ω_{2} = 1.

- $$\boldsymbol{v} = (\boldsymbol{p} + 0.5 \boldsymbol{e_1} e^{\omega_1 t \boldsymbol{I}})e^{\omega_2 t \boldsymbol{I}}$$

That might be easier to follow if instead of drawing the vector we draw the trail of points where the vector has been.

#### Double Angle Formula

Some of the power of Geometric Algebra comes from being able to go back and forth between looking at a problem geometrically and looking at it algebraically. For example, it is easy to reason that rotating a vector by θ twice is the same as rotating that same vector by 2θ. We can write that out as an algebraic expression:

- $$ e^{2 \theta \boldsymbol{I}} = e^{\theta \boldsymbol{I}} e^{\theta \boldsymbol{I}} $$

If we expand both sides of the equation above using the definition of
$e^{\theta \boldsymbol{I}}$ we get:

- $$ \begin{align*} \cos 2 \theta + \sin 2 \theta \boldsymbol{I} &= (\cos \theta + \sin \theta \boldsymbol{I}) (\cos \theta + \sin \theta \boldsymbol{I}) \\ &= \cos^2 \theta + 2 \cos \theta \sin \theta \boldsymbol{I} + \sin^2 \theta \boldsymbol{I}^2 \\ &= \cos^2 \theta + 2 \cos \theta \sin \theta \boldsymbol{I} - \sin^2 \theta \\ &= \cos^2 \theta - \sin^2 \theta + 2 \cos \theta \sin \theta \boldsymbol{I} \end{align*} $$

Comparing the coefficients on the left hand side of the equation to that on the right hand side we find we have derived the Double Angle Formulas:

- $$ \cos 2 \theta = \cos^2 \theta - \sin^2 \theta $$
- $$ \sin 2 \theta = 2 \cos \theta \sin \theta $$
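The derivation is easy to sanity-check numerically at an arbitrary angle (the test value 0.7 radians is my own choice):

```python
import math

theta = 0.7  # an arbitrary test angle, in radians

# Expand (cos t + sin t I)(cos t + sin t I) using I^2 = -1:
scalar = math.cos(theta)**2 - math.sin(theta)**2
bivec  = 2 * math.cos(theta) * math.sin(theta)

# The coefficients match the double angle formulas.
assert math.isclose(scalar, math.cos(2 * theta))
assert math.isclose(bivec, math.sin(2 * theta))
```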

You could start with the same geometric reasoning about any two angles, α and β, and use the same derivation to get the general Angle sum identities. The power here is the ability to move back and forth between algebraic and geometric reasoning quickly and easily.

#### Complex Numbers

From the definition of our Rotor, if we set θ to 90 degrees then, since
*cos* becomes 0, we are left with only **I**, which is a 90-degree
Rotor. But if we apply a 90-degree Rotor twice we should get a 180-degree
Rotor:

- $$ \begin{align*} \boldsymbol{I} \boldsymbol{I} &= \boldsymbol{e_{12}} \boldsymbol{e_{12}} \\ &= \boldsymbol{e_1} \boldsymbol{e_2} \boldsymbol{e_1} \boldsymbol{e_2} \\ &= - \boldsymbol{e_1} \boldsymbol{e_1} \boldsymbol{e_2} \boldsymbol{e_2} \\ &= - 1 \boldsymbol{e_2} \boldsymbol{e_2} \\ &= - 1 \\ \end{align*} $$

And -1 is exactly what we would expect, since that's what you multiply a
vector by to rotate it 180 degrees. But what we also have is a quantity in
𝔾^{2} that when squared is equal to -1. This should remind you of
*i* in the complex numbers ℂ, but without the need to take the square
root of a negative number, or invoke anything imaginary. In fact the subset of
all linear combinations of **{1, I}** is closed under the geometric product
and is isomorphic to ℂ.
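The isomorphism is easy to demonstrate: multiplying two elements a + b**I** using **I**² = -1 gives exactly complex multiplication. A sketch (representation my own), checked against Python's built-in complex numbers:

```python
# An element of span{1, I} as a pair (a, b) meaning a + b*I.
def mul(p, q):
    (a, b), (c, d) = p, q
    # (a + bI)(c + dI) = (ac - bd) + (ad + bc)I, using I^2 = -1.
    return (a*c - b*d, a*d + b*c)

p, q = (1.0, 2.0), (3.0, -1.0)
z = complex(*p) * complex(*q)
assert mul(p, q) == (z.real, z.imag)
```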

#### Characterizing B

Now that we have learned about Rotors, let's apply that knowledge to characterize elements of the form:

- $$ \boldsymbol{B} = B_1 + B_2 \boldsymbol{e_{12}} $$

First, let's look at the relationship between any two non-zero vectors.

We can reason out geometrically that given **b** we can get **a** from
it by first scaling **b** to have a norm of 1, then rotating it to have the
same direction as **a**, and then finally scaling that unit vector to have
the same length as **a**. Now let's write that out algebraically, where θ
is the angle between the two vectors.

- $$ \begin{align*} \boldsymbol{a} &= ||\boldsymbol{a}|| e^{\theta \boldsymbol{I}} \frac{1}{||\boldsymbol{b}||} \boldsymbol{b} \\ &= \frac{||\boldsymbol{a}||}{||\boldsymbol{b}||} e^{\theta \boldsymbol{I}} \boldsymbol{b} \end{align*} $$

If we look at any product of two non-zero vectors, **ab**, we know we get
an operator that, under the geometric product, takes vectors and returns new
vectors. If we substitute our derivation of how to get **a** from **b**,
then we get:

- $$ \begin{align*} \boldsymbol{ab} &= \frac{||\boldsymbol{a}||}{||\boldsymbol{b}||} e^{\theta \boldsymbol{I}} \boldsymbol{b} \boldsymbol{b} \\ &= \frac{||\boldsymbol{a}||}{||\boldsymbol{b}||} e^{\theta \boldsymbol{I}} ||\boldsymbol{b}||^2 \\ &= ||\boldsymbol{a}|| ||\boldsymbol{b}|| e^{\theta \boldsymbol{I}} \end{align*} $$

So every such operator **ab** is actually just a rotation and a dilation.
We can see this in action if we have the operator **ab** and apply it to
vector **c** to get vector **d**. The animation will perturb vector
**b** to show how that affects vector **d**.

- $$ \boldsymbol{d} = \boldsymbol{ab}\boldsymbol{c} $$

Our generalized form for the geometric product of two vectors is:

- $$ \boldsymbol{B} = B_1 + B_2 \boldsymbol{e_{12}} $$

We can use what we've learned so far to break that apart into its scalar and Rotor components:

- $$ \boldsymbol{B} = k e^{\theta \boldsymbol{I}} $$

Start by applying **B** to a unit basis element, which we know has a norm
of 1, which gives us a new vector **v**.

- $$ \begin{align*} \boldsymbol{v} &= \boldsymbol{B}\boldsymbol{e_1} \\ &= k e^{\theta \boldsymbol{I}} \boldsymbol{e_1} \end{align*} $$

We can see from the last equation that **v** has a norm of k, and now that
we know k, we can divide **B** by k to get our Rotor.

- $$ \begin{align*} k &= ||\boldsymbol{B}\boldsymbol{e_1}|| = \sqrt{B_{1}^{2} + B_{2}^{2}} \\ e^{\theta \boldsymbol{I}} &= \frac{1}{k} \boldsymbol{B} \end{align*} $$
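Numerically, recovering k and θ from B's coefficients is a two-liner. In this sketch the coefficients B1, B2 are example values of my own, and `atan2` is my choice of a standard way to recover the angle (it is not used in the text):

```python
import math

B1, B2 = 3.0, 4.0              # hypothetical coefficients of B = B1 + B2*e12

k = math.sqrt(B1**2 + B2**2)   # the dilation factor, ||B e1||
theta = math.atan2(B2, B1)     # the angle of the Rotor part

# Check: k * e^(theta I) reproduces B's coefficients.
assert math.isclose(k * math.cos(theta), B1)
assert math.isclose(k * math.sin(theta), B2)
```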

#### Ratios

While applying the operator **ab** above did show some of the behavior, it
may be useful to start over, this time building our operator from a ratio.
That is, if we have two vectors **a** and **b**, and are given a third
vector **c**, we'd like to calculate the vector **d** so that the two
pairs have the same ratio, i.e.

- $$ \boldsymbol{d} / \boldsymbol{c} = \boldsymbol{b} / \boldsymbol{a} $$

The geometric product isn't commutative, so we have to choose a side to do the division on, so we will write this as:

- $$ \boldsymbol{d}\boldsymbol{c^{-1}} = \boldsymbol{b}\boldsymbol{a^{-1}} $$

But that's just a simple algebraic equation we can solve by multiplying both
sides by **c**.

- $$ \begin{align*} \boldsymbol{d}\boldsymbol{c^{-1}} &= \boldsymbol{b}\boldsymbol{a^{-1}} \\ \boldsymbol{d}\boldsymbol{c^{-1}}\boldsymbol{c} &= \boldsymbol{b}\boldsymbol{a^{-1}}\boldsymbol{c} \\ \boldsymbol{d} &= \boldsymbol{b}\boldsymbol{a^{-1}}\boldsymbol{c} \end{align*} $$
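Here's a numeric check of that solution, using a (scalar, e1, e2, e12) coefficient tuple for multivectors (a representation of my own choosing, with example vectors that are also mine):

```python
def gp(m, n):
    # Geometric product of (scalar, e1, e2, e12) coefficient tuples.
    s, a1, a2, b = m
    t, c1, c2, d = n
    return (s*t + a1*c1 + a2*c2 - b*d,
            s*c1 + a1*t - a2*d + b*c2,
            s*c2 + a2*t + a1*d - b*c1,
            s*d + b*t + a1*c2 - a2*c1)

def vec(x, y):
    return (0.0, x, y, 0.0)

def inv(v):
    # Inverse of a non-zero vector: v / ||v||^2, property (7).
    n2 = v[1]**2 + v[2]**2
    return (0.0, v[1] / n2, v[2] / n2, 0.0)

a, b, c = vec(1.0, 0.0), vec(0.0, 2.0), vec(1.0, 1.0)
d = gp(gp(b, inv(a)), c)   # d = b a^{-1} c

# The ratios match: d c^{-1} == b a^{-1}
assert all(abs(x - y) < 1e-12 for x, y in zip(gp(d, inv(c)), gp(b, inv(a))))
```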

The operator $\boldsymbol{b}\boldsymbol{a}^{-1}$ should preserve the angle
between **a** and **b**, and also dilate **d** proportionally to the
norms of **a** and **b**. The following animation shows that relationship,
also perturbing **b** to show the effect on **d**.

#### Conjugates and Inverses

Let's see what the difference between **ab** and **ba** is. First let's
multiply out in terms of basis vectors:

- $$ \begin{align*} \boldsymbol{a} \boldsymbol {b} &= (a_1 \boldsymbol{e_{1}} + a_2 \boldsymbol{e_{2}}) (b_1 \boldsymbol{e_{1}} + b_2 \boldsymbol{e_{2}}) \\ &= a_1 b_1 \boldsymbol{e_{1}} \boldsymbol{e_{1}} + a_1 b_2 \boldsymbol{e_{1}} \boldsymbol{e_{2}} + a_2 b_1 \boldsymbol{e_{2}} \boldsymbol{e_{1}} + a_2 b_2 \boldsymbol{e_{2}} \boldsymbol{e_{2}} \\ &= a_1 b_1 + a_1 b_2 \boldsymbol{e_{1}} \boldsymbol{e_{2}} + a_2 b_1 \boldsymbol{e_{2}} \boldsymbol{e_{1}} + a_2 b_2 \\ &= a_1 b_1 + a_2 b_2 + a_1 b_2 \boldsymbol{e_{12}} - a_2 b_1 \boldsymbol{e_{12}} \\ &= (a_1 b_1 + a_2 b_2) + (a_1 b_2 - a_2 b_1) \boldsymbol{I} \end{align*} $$

If we swap **a** and **b** we get:

- $$ \begin{align*} \boldsymbol{b} \boldsymbol {a} &= (b_1 a_1 + b_2 a_2) + (b_1 a_2 - b_2 a_1) \boldsymbol{I} \\ &= (a_1 b_1 + a_2 b_2) + (b_1 a_2 - b_2 a_1) \boldsymbol{I} \\ &= (a_1 b_1 + a_2 b_2) + (a_2 b_1 - a_1 b_2) \boldsymbol{I} \\ &= (a_1 b_1 + a_2 b_2) - (a_1 b_2 - a_2 b_1) \boldsymbol{I} \\ \end{align*} $$

In that last step we just factored out a -1 from the coefficient of **I**.
If we substitute:

- $$ \begin{align*} B_1 &= a_1 b_1 + a_2 b_2 \\ B_2 &= a_1 b_2 - a_2 b_1 \end{align*} $$

Then we get:

- $$ \begin{align*} \boldsymbol{a} \boldsymbol {b} &= B_1 + B_2 \boldsymbol{I} \\ \boldsymbol{b} \boldsymbol {a} &= B_1 - B_2 \boldsymbol{I} \end{align*} $$

So if we reverse the order of the geometric product of our vectors we end up with the equivalent of the complex conjugate.

We will denote the reverse of the product of two vectors with a dagger. While
this maps to the conjugate in 𝔾^{2}, reversing a product of multiple
vectors becomes more important and powerful in 𝔾^{3}.

- $$ \begin{align*} \boldsymbol{B} &= \boldsymbol{a} \boldsymbol{b} &= B_1 + B_2 \boldsymbol{I} \\ \boldsymbol{B}^{\dagger} &= \boldsymbol{b} \boldsymbol{a} &= B_1 - B_2 \boldsymbol{I} \end{align*} $$

If we multiply them together we find:

- $$ \begin{align*} \boldsymbol{B} \boldsymbol{B}^{\dagger} &= \boldsymbol{abba} \\ &= \boldsymbol{a} ||\boldsymbol{b}||^2 \boldsymbol{a} \\ &= ||\boldsymbol{b}||^2 \boldsymbol{aa} \\ &= ||\boldsymbol{b}||^2 ||\boldsymbol{a}||^2 \\ &= ||\boldsymbol{a}||^2 ||\boldsymbol{b}||^2 \end{align*} $$

Their product just ends up being a scalar, so if we divide by that scalar value we should get:

- $$ \begin{align*} \boldsymbol{B} \frac{\boldsymbol{B}^{\dagger}}{||\boldsymbol{a}||^2 ||\boldsymbol{b}||^2} &= \frac{||\boldsymbol{a}||^2 ||\boldsymbol{b}||^2}{ ||\boldsymbol{a}||^2 ||\boldsymbol{b}||^2} \\ &= 1 \end{align*} $$

Which means we've found the multiplicative inverse of **B**.

- $$ \boldsymbol{B}^{-1} = \frac{\boldsymbol{B}^{\dagger}}{\boldsymbol{B}\boldsymbol{B^{\dagger}}} $$

Normally geometric products aren't commutative, but in this case we can see that we get the same result when we reverse the order of B and B dagger:

- $$ \begin{align*} \boldsymbol{B}^{\dagger}\boldsymbol{B} &= \boldsymbol{baab} \\ &= \boldsymbol{b} ||\boldsymbol{a}||^2 \boldsymbol{b} \\ &= ||\boldsymbol{a}||^2 \boldsymbol{bb} \\ &= ||\boldsymbol{a}||^2 ||\boldsymbol{b}||^2 \end{align*} $$

So our inverse will work whether applied on the left or on the right.
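Putting the inverse formula into code, again with multivectors as (scalar, e1, e2, e12) coefficient tuples and example vectors of my own:

```python
def gp(m, n):
    # Geometric product of (scalar, e1, e2, e12) coefficient tuples.
    s, a1, a2, b = m
    t, c1, c2, d = n
    return (s*t + a1*c1 + a2*c2 - b*d,
            s*c1 + a1*t - a2*d + b*c2,
            s*c2 + a2*t + a1*d - b*c1,
            s*d + b*t + a1*c2 - a2*c1)

def rev(m):
    # Reverse (dagger): flips the sign of the e12 part, the conjugate in G^2.
    return (m[0], m[1], m[2], -m[3])

a = (0.0, -1.0, 2.0, 0.0)   # -e1 + 2 e2
b = (0.0, 2.0, 3.0, 0.0)    #  2 e1 + 3 e2
B = gp(a, b)

s = gp(B, rev(B))[0]        # B B^dagger is the scalar ||a||^2 ||b||^2
assert s == 65.0            # = 5 * 13

Binv = tuple(x / s for x in rev(B))
# Binv works as an inverse on both sides.
assert all(abs(x - y) < 1e-12 for x, y in zip(gp(B, Binv), (1.0, 0.0, 0.0, 0.0)))
assert all(abs(x - y) < 1e-12 for x, y in zip(gp(Binv, B), (1.0, 0.0, 0.0, 0.0)))
```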

Let's see how that inverse operates by applying it to our previous ratio
example. This time we'll not only apply the $\boldsymbol{b}\boldsymbol{a}^{-1}$
operator, but also apply its inverse to **c** to see how it compares.

- $$ \begin{align*} \boldsymbol{d} &= \boldsymbol{b}\boldsymbol{a^{-1}}\boldsymbol{c} \\ \boldsymbol{d'} &= (\boldsymbol{b}\boldsymbol{a^{-1}})^{-1}\boldsymbol{c} \end{align*} $$

Note that starting from conjugates isn't the only way to construct such an inverse. For example, because each non-zero vector has a multiplicative inverse, we can come to the same conclusion:

- $$ \begin{align*} 1 &= \boldsymbol{a} (\frac{\boldsymbol{a}}{||\boldsymbol{a}||^2}) \\ &= \boldsymbol{a} 1 (\frac{\boldsymbol{a}}{||\boldsymbol{a}||^2}) \\ &= \boldsymbol{a} \boldsymbol{b} (\frac{\boldsymbol{b}}{||\boldsymbol{b}||^2}) (\frac{\boldsymbol{a}}{||\boldsymbol{a}||^2}) \\ &= \boldsymbol{ab} (\frac{\boldsymbol{ba}}{||\boldsymbol{a}||^2||\boldsymbol{b}||^2}) \end{align*} $$

### Further Reading

There are other introductions to GA around the web, some of the ones I've found helpful are: