# Peeter Joot's (OLD) Blog.


## Dot product of vector and bivector

Posted by peeterjoot on August 11, 2009


Scott asked about the following vector bivector product identities

\begin{aligned}\mathbf{w} (\mathbf{u} \wedge \mathbf{v}) &= \mathbf{w} \cdot (\mathbf{u} \wedge \mathbf{v}) + \mathbf{w} \wedge \mathbf{u} \wedge \mathbf{v} \\ (\mathbf{u} \wedge \mathbf{v}) \mathbf{w} &= - \mathbf{w} \cdot (\mathbf{u} \wedge \mathbf{v}) + \mathbf{w} \wedge \mathbf{u} \wedge \mathbf{v} \end{aligned}
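These two identities, and in particular the sign flip in the second one, can be checked numerically. The sketch below is my own illustrative construction (the bitmask blade representation and the helper names `blade_mul`, `gp`, `grade`, and `vec` are not from the post): it implements the Euclidean geometric product on basis blades, builds random vectors, and confirms that the grade-1 parts of $\mathbf{w}(\mathbf{u} \wedge \mathbf{v})$ and $(\mathbf{u} \wedge \mathbf{v})\mathbf{w}$ differ only by sign, while the grade-3 parts agree.

```python
from itertools import product
from random import seed, uniform

def blade_mul(a, b):
    """Geometric product of two basis blades (bitmasks) in a Euclidean
    metric: returns (sign, result_blade)."""
    # count the transpositions needed to merge the factors into canonical order
    swaps, t = 0, a >> 1
    while t:
        swaps += bin(t & b).count("1")
        t >>= 1
    return (-1 if swaps & 1 else 1), a ^ b

def gp(x, y):
    """Geometric product of multivectors stored as {bitmask: coeff} dicts."""
    out = {}
    for (ba, ca), (bb, cb) in product(x.items(), y.items()):
        s, blade = blade_mul(ba, bb)
        out[blade] = out.get(blade, 0.0) + s * ca * cb
    return out

def grade(x, k):
    """Select the grade-k part of a multivector."""
    return {b: c for b, c in x.items() if bin(b).count("1") == k}

def vec(cs):
    """Build a vector from components on e1, e2, e3."""
    return {1 << i: c for i, c in enumerate(cs)}

seed(0)
u, v, w = (vec([uniform(-1, 1) for _ in range(3)]) for _ in range(3))

uv = grade(gp(u, v), 2)            # u ∧ v: the grade-2 part of u v
lhs = gp(w, uv)                    # w (u ∧ v)
dot_left = grade(lhs, 1)           # w · (u ∧ v)
dot_right = grade(gp(uv, w), 1)    # (u ∧ v) · w
wedge_left = grade(lhs, 3)         # w ∧ u ∧ v
wedge_right = grade(gp(uv, w), 3)  # u ∧ v ∧ w

# the dot products differ only by sign; the wedge products agree
for b in dot_left:
    assert abs(dot_left[b] + dot_right.get(b, 0.0)) < 1e-12
for b in wedge_left:
    assert abs(wedge_left[b] - wedge_right.get(b, 0.0)) < 1e-12
```

The bitmask representation makes the sign bookkeeping mechanical: each swap needed to sort the combined basis factors contributes a factor of $-1$, and repeated factors cancel via the XOR.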

Specifically, he asked why the sign in the second identity is negative, and whether this was a typo (these identities can be found in my GA notes compilation).

It probably would have been clearer to write

\begin{aligned}(\mathbf{u} \wedge \mathbf{v}) \mathbf{w} = (\mathbf{u} \wedge \mathbf{v}) \cdot \mathbf{w} + (\mathbf{u} \wedge \mathbf{v}) \wedge \mathbf{w} \end{aligned}

But this is an equivalent statement, not a correction. Let's see why. The key point is that while the dot product of two vectors is symmetric, the dot product of a vector with other objects (such as this bivector) may be antisymmetric.

The fundamental definitions of the generalized dot and wedge products are really based on grade selection. The product of a grade $r$ blade $A$ with a vector has grade $r-1$ and grade $r+1$ components. Writing ${\left\langle{{A}}\right\rangle}_{{k}}$ for the grade $k$ part of a multivector $A$, and considering the product of $A$ with a vector $\mathbf{v}$, these grade $r-1$ and grade $r+1$ parts can be labeled the dot and wedge products respectively, and we define

\begin{aligned}A \cdot \mathbf{v} &\equiv {\left\langle{{A \mathbf{v}}}\right\rangle}_{{r-1}} \\ A \wedge \mathbf{v} &\equiv {\left\langle{{A \mathbf{v}}}\right\rangle}_{{r+1}} \end{aligned} \quad\quad\quad(3)

It is also possible to show that symmetric and antisymmetric sums neatly split such an $A \mathbf{v}$ product into its grade $r-1$ and grade $r+1$ parts, expressing the dot and wedge products as

\begin{aligned}A \cdot \mathbf{v} &= \frac{1}{{2}} (A \mathbf{v} -(-1)^r \mathbf{v} A) = (-1)^{r+1} \mathbf{v} \cdot A \\ A \wedge \mathbf{v} &= \frac{1}{{2}} (A \mathbf{v} +(-1)^r \mathbf{v} A) = (-1)^r \mathbf{v} \wedge A \end{aligned} \quad\quad\quad(5)

Specifically, for the bivector vector dot product in the original identities we have

\begin{aligned}A \cdot \mathbf{v} &= - \mathbf{v} \cdot A \\ A \wedge \mathbf{v} &= \mathbf{v} \wedge A \end{aligned} \quad\quad\quad(7)

To remove some of the abstraction, we can illustrate these general relationships by example. Suppose we are working in three-dimensional Euclidean space with orthonormal unit vectors $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$. Orthonormal vectors in a Euclidean space have a unit square, such as $\mathbf{e}_1 \mathbf{e}_1 = 1$, and any pair of perpendicular vectors, such as $\mathbf{e}_1$ and $\mathbf{e}_2$, changes sign on interchange: $\mathbf{e}_1 \mathbf{e}_2 = -\mathbf{e}_2 \mathbf{e}_1$. Using the fundamental grade-selection definition of the generalized dot product (3), as opposed to the derived antisymmetric identity of (5), the dot product of the bivector $\mathbf{e}_1 \mathbf{e}_2$ for the $x$-$y$ plane with $\mathbf{e}_1$, one of the unit vectors in that plane, is

\begin{aligned}(\mathbf{e}_1 \mathbf{e}_2) \cdot \mathbf{e}_1&={\left\langle{{ \mathbf{e}_1 \mathbf{e}_2 \mathbf{e}_1 }}\right\rangle}_{1} \\ &=-{\left\langle{{ \mathbf{e}_1 \mathbf{e}_1 \mathbf{e}_2 }}\right\rangle}_{1} \\ &=-{\left\langle{{ \mathbf{e}_2 }}\right\rangle}_{1} \\ &=-\mathbf{e}_2 \end{aligned}

Similarly, dotting from the left we have

\begin{aligned}\mathbf{e}_1 \cdot (\mathbf{e}_1 \mathbf{e}_2) &={\left\langle{{ \mathbf{e}_1 \mathbf{e}_1 \mathbf{e}_2 }}\right\rangle}_{1} \\ &={\left\langle{{ \mathbf{e}_2 }}\right\rangle}_{1} \\ &=\mathbf{e}_2 \end{aligned}
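These two basis computations can also be checked mechanically. In the short sketch below (my own illustrative representation, not from the post), basis blades are bitmasks, and the hypothetical helper `blade_mul` computes the Euclidean geometric product of two blades by counting the sign-flipping transpositions and cancelling repeated factors:

```python
def blade_mul(a, b):
    """Geometric product of basis blades (bitmasks), Euclidean metric:
    returns (sign, result_blade)."""
    swaps, t = 0, a >> 1
    while t:
        swaps += bin(t & b).count("1")
        t >>= 1
    return (-1 if swaps & 1 else 1), a ^ b

e1, e2 = 0b001, 0b010
s12, e12 = blade_mul(e1, e2)      # the bivector e1 e2

# (e1 e2) e1 is already grade 1, so it equals the dot product
sign, blade = blade_mul(e12, e1)
print(sign * s12, blade == e2)    # -1 True  → (e1 e2) · e1 = -e2

# e1 (e1 e2): dotting from the left instead
sign, blade = blade_mul(e1, e12)
print(sign * s12, blade == e2)    # 1 True   → e1 · (e1 e2) = e2
```

The opposite signs in the two printed results are exactly the interchange of sign discussed above.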

Observe that the order of the operands in the dot product is significant in this case. This interchange of sign for the dot product of a vector with a bivector is a general property, and it holds even in more general metrics (such as Minkowski space, where the basis vectors square to $\pm 1$).