Geometric Algebra equivalents for Pauli Matrices.
Posted by peeterjoot on July 12, 2009
Having learned Geometric (Clifford) Algebra from (), (), (), and other sources before studying any quantum mechanics, I found it difficult to work with (and talk to people familiar with) the Pauli and Dirac matrix notation used in traditional quantum mechanics.
The aim of these notes is to work through equivalents to many Clifford algebra expressions entirely in commutator and anticommutator notations. This will show the mapping between the (generalized) dot product and the wedge product, and also show how the different grade elements of the Clifford algebra manifest in their matrix forms.
The matrices in question are

\(\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.\)

These all have positive square, as do the traditional Euclidean unit vectors \(\mathbf{e}_k\), and so can be used algebraically as a vector basis for \(\mathbb{R}^3\). So any vector that we can write in coordinates

\(x = \sum_k x^k \mathbf{e}_k,\)

we can equivalently write (an isomorphism) in terms of the Pauli matrices

\(X = \sum_k x^k \sigma_k.\)
() introduces the Pauli vector \(\vec{\sigma} = \sum_k \sigma_k \mathbf{e}_k\) as a mechanism for mapping between a vector basis and this matrix basis
This is a curious looking construct, with products of matrices and vectors. Obviously these are not the usual column vector representations. This Pauli vector is thus really a notational construct. If one expresses a vector using the standard orthonormal Euclidean basis, and then takes the dot product with the Pauli vector in a mechanical fashion
one arrives at the matrix representation of the vector in the Pauli basis . Does this construct have any value? That I don’t know, but for the rest of these notes the coordinate representation as in equation (4) will be used directly.
It was stated that the Pauli matrices have unit square. Direct calculation of this is straightforward, and confirms the assertion
Note that unlike the vector (Clifford) square, the result here is the identity matrix and not a scalar.
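As a quick numerical check (a sketch using numpy; the names s1, s2, s3 are mine), the unit square property can be verified directly:

```python
import numpy as np

# The three Pauli matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# Each squares to the 2x2 identity matrix, not to the scalar 1.
all_unit_square = all(np.allclose(s @ s, np.eye(2)) for s in (s1, s2, s3))
```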
If we are to operate with Pauli matrices how do we express our most basic vector operation, the length?
Examining a vector lying along one direction, say \(x = \alpha \mathbf{e}_1\), we expect
Let's contrast this to the Pauli square for the same vector
The wiki article mentions the trace, but no application for it. Since \(\mathrm{Tr}(I) = 2\), one observable application is that the trace operator provides a mechanism to convert a diagonal matrix to a scalar. In particular, for this scaled unit vector we have
It is plausible to guess that the squared length will be related to the matrix square in the general case as well
Let’s see if this works by performing the coordinate expansion
A split into equal and different indexes thus leaves
As an algebra that is isomorphic to the Clifford algebra, it is expected that the matrices anticommute for \(i \neq j\). Multiplying these out verifies this
Thus in (5) the sum over the \(i \neq j\) indexes is zero.
Having computed this, our vector square leaves us with the squared vector length multiplied by the identity matrix
Invoking the trace operator will therefore extract just the scalar length desired
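This length extraction can be sketched numerically (numpy; the helper `pauli` and the sample vector are my own choices):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(x):
    """Map a real 3-vector to its Pauli-basis matrix X = sum_k x_k sigma_k."""
    return sum(xk * sk for xk, sk in zip(x, sigma))

x = [1.0, 2.0, 3.0]
X = pauli(x)
# X^2 = |x|^2 I, so the trace (summing the two diagonal entries) gives 2|x|^2.
length_sq = 0.5 * np.trace(X @ X).real
```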
Aside: Summarizing the multiplication table.
It is worth pointing out that the multiplication table above used to confirm the antisymmetric behavior of the Pauli basis can be summarized as
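The whole table, summarized as \(\sigma_i \sigma_j = \delta_{ij} I + i \sum_k \epsilon_{ijk} \sigma_k\), can be checked exhaustively (a numpy sketch; the Levi-Civita helper `eps` is my own):

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def eps(i, j, k):
    """Levi-Civita symbol for indexes in {0, 1, 2}."""
    return ((j - i) * (k - i) * (k - j)) // 2

# sigma_i sigma_j = delta_ij I + i sum_k eps_ijk sigma_k, for all i, j.
table_ok = all(
    np.allclose(sigma[i] @ sigma[j],
                (1 if i == j else 0) * np.eye(2)
                + sum(1j * eps(i, j, k) * sigma[k] for k in range(3)))
    for i in range(3) for j in range(3))
```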
Having found the expression for the length of a vector in the Pauli basis, the next logical desirable identity is the dot product. One can guess that this will be the trace of a scaled symmetric product, but one can also motivate it without guessing in the usual fashion, by calculating the length of an orthonormal sum.
Consider first the length of a general vector sum. To calculate this we first wish to calculate the matrix square of this sum.
If these vectors are perpendicular this equals the sum of the squares. Thus orthonormality implies that
We have already observed this by direct calculation for the Pauli matrices themselves. Now, this is no different from the usual description of perpendicularity in a Clifford algebra, and it is notable that there are no references to matrices in this argument. One only requires that a well-defined vector product exists, where the squared vector has a length interpretation.
One matrix-dependent observation that can be made is that since the left-hand side and the two squared terms are all diagonal, this symmetric sum must also be diagonal. Additionally, for the length of this vector sum we then have
For correspondence with the Euclidean dot product of two vectors we must then have
Here a distinct symbol has been used to denote this scalar product (i.e., a plain old number), since the dot notation will be used later for a matrix dot product (this scalar times the identity matrix), which is more natural in many ways for this Pauli algebra.
Observe the symmetric product that is found embedded in this scalar selection operation. In physics this is known as the anticommutator, whereas the commutator is the antisymmetric sum. In the physics notation the anticommutator (symmetric sum) is

\(\{A, B\} = AB + BA.\)
So this scalar selection can be written
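A numerical sketch of this scalar selection (numpy; the helper `pauli` and sample vectors are mine): since \(\{A, B\} = 2 (a \cdot b) I\), a quarter of its trace recovers the Euclidean dot product.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(x):
    """Map a real 3-vector to its Pauli-basis matrix."""
    return sum(xk * sk for xk, sk in zip(x, sigma))

a, b = [1.0, 2.0, 3.0], [4.0, -1.0, 2.0]
A, B = pauli(a), pauli(b)

# {A, B} = 2 (a . b) I, so a . b = (1/4) Tr({A, B}).
dot = 0.25 * np.trace(A @ B + B @ A).real
expected = sum(ak * bk for ak, bk in zip(a, b))
```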
Similarly, the commutator, an antisymmetric product, is denoted:
A close relationship between this commutator and the wedge product of Clifford Algebra is expected.
Symmetric and antisymmetric split.
As with the Clifford product, the symmetric and antisymmetric split of a vector product is a useful concept. This can be used to write the product of two Pauli basis vectors in terms of the anticommutator and commutator products
These follow from the definitions of the anticommutator (10) and commutator (12) products above, and are the equivalents of the Clifford symmetric and antisymmetric split into dot and wedge products
Where the dot and wedge products are respectively
Note the factor of two differences in the two algebraic notations. In particular, the very handy Clifford vector product reversal formula
has no factor of two in its Pauli anticommutator equivalent
It has been observed that the square of a vector is diagonal in this matrix representation, and can therefore be inverted for any non-zero vector
So it is therefore quite justifiable to define
This allows for the construction of a dual sided vector inverse operation.
This inverse is a scaled version of the vector itself.
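A numerical sketch of this inverse (numpy; names are mine): dividing the vector matrix by its squared length, extracted with the trace, gives a matrix that acts as the inverse from both sides.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(x):
    """Map a real 3-vector to its Pauli-basis matrix."""
    return sum(xk * sk for xk, sk in zip(x, sigma))

x = [3.0, -1.0, 2.0]
X = pauli(x)
norm_sq = 0.5 * np.trace(X @ X).real   # |x|^2 via the trace
X_inv = X / norm_sq                    # the inverse is the scaled vector itself

left_ok = np.allclose(X_inv @ X, np.eye(2))
right_ok = np.allclose(X @ X_inv, np.eye(2))
```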
The diagonality of the squared matrix, or of its inverse, allows either to commute with \(x\). This diagonality plays the same role as the scalar in a regular Clifford square. In either case the square can commute with the vector, and that commutation allows the inverse to have both left and right sided action.
Note that, as with the Clifford vector inverse, when the vector is multiplied with this inverse the product resides outside of the proper Pauli vector basis, since the identity matrix is required.
Given a vector in the Pauli basis, we can extract the coordinates using the scalar product
But we do not need to convert to strict scalar form if we are multiplying by a Pauli matrix. So in anticommutator notation this takes the form
Projection and rejection.
The usual Clifford algebra trick for projective and rejective split maps naturally to matrix form. Write
Since the square of a vector is diagonal, this first term is proportional to the vector being projected onto, and thus lies in that direction. The second term is perpendicular to it. These are in fact the projection onto, and the rejection from, that direction, respectively.
To complete the verification of this, note that the perpendicularity of the rejection term can be verified by taking dot products
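The split can be sketched numerically (numpy; the vectors and helper names are my own choices): the anticommutator term is the projection, the commutator term the rejection, and the rejection anticommutes with (is perpendicular to) the direction vector.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(x):
    """Map a real 3-vector to its Pauli-basis matrix."""
    return sum(xk * sk for xk, sk in zip(x, sigma))

x, a = [1.0, 2.0, 3.0], [0.0, 0.0, 2.0]   # project x onto a (along e3 here)
X, A = pauli(x), pauli(a)
A_inv = A / (0.5 * np.trace(A @ A).real)

proj = 0.5 * (X @ A + A @ X) @ A_inv   # anticommutator part: projection
rej = 0.5 * (X @ A - A @ X) @ A_inv    # commutator part: rejection

sum_ok = np.allclose(proj + rej, X)
perp_ok = np.allclose(rej @ A + A @ rej, np.zeros((2, 2)))
proj_expected = np.allclose(proj, pauli([0.0, 0.0, 3.0]))
```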
Space of the vector product.
Expansion of the anticommutator and commutator in coordinate form shows that these entities lie in a different space than the vectors themselves.
For real coordinate vectors in the Pauli basis, all the commutator values are imaginary multiples of the Pauli matrices, and thus not representable in the vector basis
Similarly, the anticommutator is diagonal, which also falls outside the Pauli vector basis:
These correspond to the Clifford dot product being scalar (grade zero), and the wedge defining a grade-two space, where grade expresses the minimal degree that a product can be reduced to. For example, Clifford products of orthonormal unit vectors such as

are grade one and four respectively. The proportionality constant will be dependent on the metric of the underlying vector space and the number of permutations required to group terms in pairs of matching indexes.
Completely antisymmetrized product of three vectors.
In a Clifford algebra no imaginary number is required to express the antisymmetric (commutator) product. However, the bivector space can be enumerated using a dual basis defined by multiplication of the vector basis elements with the unit volume trivector. That is also the case here and gives a geometrical meaning to the imaginaries of the Pauli formulation.
How do we even write the unit volume element in Pauli notation? This would be

\(\sigma_1 \sigma_2 \sigma_3.\)

So we have

\(\sigma_1 \sigma_2 \sigma_3 = \sigma_1 (i \sigma_1) = i \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.\)
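Numerically (a numpy sketch), the product of all three Pauli matrices is indeed \(i\) times the identity:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# The unit volume element sigma_1 sigma_2 sigma_3 equals i I.
volume = s1 @ s2 @ s3
is_i_times_identity = np.allclose(volume, 1j * np.eye(2))
```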
Similar expansion of \(\sigma_2 \sigma_3 \sigma_1\), or \(\sigma_3 \sigma_1 \sigma_2\), shows that we must also have
Until now the differences in notation between the anticommutator/commutator and the dot/wedge products of the Pauli algebra and Clifford algebra respectively have only differed by factors of two, which isn't much of a big deal. However, having to express the naturally associative wedge product operation in the non-associative-looking notation of equation (20) is rather unpleasant. An expression of that form gives no mnemonic hint of the underlying associativity, and actually seems obfuscating. I suppose that one could get used to it though.
We expect to get a three by three determinant out of the trivector product. Let’s verify this by expanding this in Pauli notation for three general coordinate vectors
In particular, our unit volume element is
So one sees that the complex number \(i\) in the Pauli algebra can logically be replaced by the unit pseudoscalar \(\sigma_1 \sigma_2 \sigma_3\), and relations involving \(i\), like the commutator expansion of a vector product, are restored to the expected dual form of Clifford algebra
We’ve seen that multiplication by \(i\) is a duality operation, which is expected since \(i\) is the matrix equivalent of the unit pseudoscalar. Logically this means that for a vector \(x\), the product \(i x\) represents a plane quantity (torque, angular velocity/momentum, …). Similarly, if \(B\) is a plane object, then \(i B\) will have a vector interpretation.
In particular, for the antisymmetric (commutator) part of the vector product
a “vector” in the dual space spanned by \(i\sigma_1, i\sigma_2, i\sigma_3\) is seen to be more naturally interpreted as a plane quantity (a bivector in Clifford algebra).
As in Clifford algebra, we can write the cross product in terms of the antisymmetric product
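This cross product identity can be sketched numerically (numpy; names and sample vectors mine): since \(AB = (a \cdot b) I + i\, \sigma_{a \times b}\), dividing the commutator by \(2i\) leaves the matrix of the cross product, whose components can be read back off with traces.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(x):
    """Map a real 3-vector to its Pauli-basis matrix."""
    return sum(xk * sk for xk, sk in zip(x, sigma))

a, b = np.array([1.0, 2.0, 3.0]), np.array([4.0, -1.0, 2.0])
A, B = pauli(a), pauli(b)

# [A, B] = 2 i sigma_(a x b): divide by 2i, then extract components
# with (1/2) Tr(sigma_k .).
C = (A @ B - B @ A) / 2j
cross = np.array([0.5 * np.trace(sk @ C).real for sk in sigma])

matches_numpy = np.allclose(cross, np.cross(a, b))
```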
With the factor of \(2i\) in the denominator here (like the exponential form of sine), it is interesting to contrast this to the cross product in its trigonometric form
This shows we can make the curious identity
If one did not already know about the dual-sided half-angle rotation formulation of Clifford algebra, this is a hint about how one could potentially work towards that. We have the commutator (or wedge product) as a rotation operator that leaves the normal component of a vector untouched (commutes with the normal vector).
Complete algebraic space.
Pauli equivalents for all the elements in the Clifford algebra have now been determined.
Summing these we have the mapping from Clifford basis to Pauli matrix as follows
Thus for any given sum of scalar, vector, bivector, and trivector elements we can completely express this in Pauli form as a general complex matrix.
Provided that one can also extract the coordinates for each of the grades involved, this also provides a complete Clifford algebra characterization of an arbitrary complex matrix.
Computationally this has some nice looking advantages. Given any canned complex matrix software, one should be able to cook up a working Clifford calculator with little work.
As for the coordinate extraction, part of the work can be done by taking real and imaginary components. Let an element of the general algebra be denoted
We therefore have
By inspection, symmetric and antisymmetric sums of the real and imaginary parts recovers the coordinates as follows
In terms of grade selection operations, the decomposition by grade is
Employing the duality relations, this can be made slightly more symmetrical, with real operations selecting the vector coordinates and imaginary operations selecting the bivector coordinates.
Finally, returning to the Pauli algebra, this also provides the following split of the Pauli multivector matrix into its geometrically significant components
The reversal operation switches the order of the product of perpendicular vectors. This will change the sign of the grade two and three terms in the Pauli algebra. Since \(\sigma_2\) is imaginary, conjugation alone does not have the desired effect, but Hermitian conjugation (conjugate transpose) does the trick.
Since the reverse operation can be written as Hermitian conjugation, one can also define the anticommutator and commutator in terms of reversion in a way that seems particularly natural for complex matrices. That is
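The reversion/Hermitian-conjugation correspondence is easy to check numerically (numpy sketch; sample vectors mine): vector matrices are Hermitian, so the conjugate transpose of a product is exactly the reversed product.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(x):
    """Map a real 3-vector to its Pauli-basis matrix."""
    return sum(xk * sk for xk, sk in zip(x, sigma))

A, B = pauli([1.0, 2.0, 3.0]), pauli([4.0, -1.0, 2.0])

# (A B)^dagger = B^dagger A^dagger = B A: Hermitian conjugation reverses
# the product order, exactly like Clifford algebra reversion.
reversion_ok = np.allclose((A @ B).conj().T, B @ A)
```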
Rotations take the normal Clifford, dual-sided quaternionic form. A rotation about a unit normal \(n\) will be
The rotor commutes with any component of the vector \(x\) that is parallel to the normal (perpendicular to the plane), whereas it anticommutes with the components in the plane. Writing the vector components perpendicular and parallel to the plane respectively as \(x_\perp\) and \(x_\parallel\), the essence of the rotation action is this selective commutation or anticommutation behavior
Here the exponential has the obvious meaning in terms of exponential series, so for this bivector case we have
The unit bivector can also be defined explicitly in terms of two vectors in the plane
Where the bivector length is defined in terms of the conjugate square (bivector times bivector reverse)
Examples to complete this subsection would make sense. As one of the most powerful and useful operations in the algebra, it would be a shame in terms of completeness to skimp on this. However, except for some minor differences, like substitution of the Hermitian conjugate operation for reversal and the use of the identity matrix in place of the scalar in the exponential expansion, the treatment is exactly the same as in the Clifford algebra.
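A small worked rotation (a numpy sketch; the angle and axis are my own choices): since \((n \cdot \sigma)^2 = I\), the exponential series collapses to \(\cos(\theta/2) I - i \sin(\theta/2)\, n \cdot \sigma\), and a quarter turn about \(\mathbf{e}_3\) should carry \(\mathbf{e}_1\) into \(\mathbf{e}_2\).

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(x):
    """Map a real 3-vector to its Pauli-basis matrix."""
    return sum(xk * sk for xk, sk in zip(x, sigma))

theta = np.pi / 2
N = pauli([0.0, 0.0, 1.0])               # rotation plane normal: e3

# Closed form of exp(-i theta/2 n.sigma), valid since (n.sigma)^2 = I.
R = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * N

X = pauli([1.0, 0.0, 0.0])               # the vector e1
X_rot = R @ X @ R.conj().T               # dual-sided half-angle rotation

rotated_to_e2 = np.allclose(X_rot, pauli([0.0, 1.0, 0.0]))
```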
Coordinate equations for grade selection were worked out above, but the observation that reversion and Hermitian conjugation are isomorphic operations can partially clean this up. In particular, a Hermitian conjugate symmetrization and antisymmetrization of the general matrix provides a nice split into quaternion and dual quaternion parts. That is
Now, having done that, how to determine , , , and becomes the next question. Once that is done, the individual coordinates can be picked off easily enough. For the vector parts, a Fourier decomposition as in equation (18) will retrieve the desired coordinates.
The dual vector coordinates can be picked off easily enough by taking dot products with the dual basis vectors
For the quaternion part the aim is to figure out how to isolate or subtract out the scalar part. This is the only tricky bit, because the scalar part is mixed up with the \(\sigma_3\) term of the vector, which is also real and diagonal. Consideration of the sum
shows that the trace will recover just the scalar value, so we have
Next is isolation of the pseudoscalar part of the dual quaternion. As with the scalar part, consideration of the sum of the pseudoscalar term and the diagonal bivector term is required
So the trace of the dual quaternion provides the pseudoscalar coordinate, leaving the bivector and pseudoscalar grade split
A final assembly of these results provides the following coordinate free grade selection operators
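The full coordinate extraction can be sketched numerically (numpy; the element \(y = \alpha I + a \cdot \sigma + i\, b \cdot \sigma + i \beta I\) and all names are my own choices). Traces against the identity and the Pauli matrices recover all eight real coordinates:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(x):
    """Map a real 3-vector to its Pauli-basis matrix."""
    return sum(xk * sk for xk, sk in zip(x, sigma))

alpha, beta = 1.5, -0.5
a, b = [1.0, 2.0, 3.0], [4.0, -1.0, 2.0]

# scalar + vector + bivector (dual vector) + pseudoscalar parts:
y = (alpha * np.eye(2) + pauli(a)
     + 1j * pauli(b) + 1j * beta * np.eye(2))

# Tr(y) = 2 alpha + 2 i beta; Tr(sigma_k y) = 2 a_k + 2 i b_k.
alpha_rec = 0.5 * np.trace(y).real
beta_rec = 0.5 * np.trace(y).imag
a_rec = [0.5 * np.trace(sk @ y).real for sk in sigma]
b_rec = [0.5 * np.trace(sk @ y).imag for sk in sigma]
```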
Generalized dot products.
Here the equivalent of the generalized Clifford bivector/vector dot product will be computed, as well as the associated distribution equation
To translate this write
Then with the identifications
From this we also get the strictly Pauli algebra identity
But the geometric meaning of this is unfortunately somewhat obfuscated by the notation.
Generalized dot and wedge product.
The fundamental definitions of dot and wedge products are in terms of grade
Using the trace and Hermitian conjugate split grade selection operations above, we can calculate these for each of the four grades in the Pauli algebra.
There are three dot products to consider: vector/vector, bivector/bivector, and trivector/trivector. In each case we want to compute
For vectors the Hermitian conjugate leaves the factors unchanged, since the Pauli basis is Hermitian, whereas for bivectors and trivectors it introduces a sign change. Therefore, in all cases where the two factors have equal grades we have
We have two dot products that produce vectors: bivector/vector, and trivector/bivector. In each case we need to compute
For the bivector/vector dot product we have
For a bivector and a vector, our symmetric Hermitian sum in coordinates is
Any matched-index terms will vanish, leaving only the bivector terms, which are traceless. We therefore have
This result was borrowed without motivation from Clifford algebra in equation (33), and is thus not satisfactory in terms of a logically derived sequence.
For a trivector dotted with a bivector we have
This is also traceless, and the trivector/bivector dot product is therefore reduced to just
This is the duality relationship for bivectors. Multiplication by the unit pseudoscalar (or any multiple of it), produces a vector, the dual of the original bivector.
We have two products that produce a grade-two term: the vector wedge product, and the vector/trivector dot product. For either case we must compute
For a vector and a trivector we need the antisymmetric Hermitian sum
This is a pure bivector, and thus traceless, leaving just
Again we have the duality relation, pseudoscalar multiplication with a vector produces a bivector, and is equivalent to the dot product of the two.
Now for the wedge product case, with two vectors, we must compute
All the matched-index terms vanish, leaving a pure bivector which is traceless, so only the first term of (36) is relevant, and is in this case a commutator
There are two ways we can produce a grade three term in the algebra. One is a wedge of a vector with a bivector, and the other is the wedge product of three vectors. The triple wedge product is the grade three term of the product of the three
With a split of the and terms into symmetric and antisymmetric terms we have
The symmetric term is diagonal, so it commutes (equivalent to scalar commutation with a vector in Clifford algebra), and its contribution therefore vanishes. We therefore have
In terms of the original three vectors this is
Since this could have been expanded with the other grouping of the product instead, we also have
Plane projection and rejection.
Projection of a vector onto a plane follows like the vector projection case. In the Pauli notation this is
Here the plane is a bivector, so if two vectors are in the plane, the orientation and attitude can be represented by the commutator
So we have
Of these the second term is our projection onto the plane, while the first is the normal component of the vector.
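The plane split can be sketched numerically (numpy; plane and sample vector are my own choices): with \(B\) the commutator bivector for the \(\mathbf{e}_1, \mathbf{e}_2\) plane, the commutator term \(\tfrac{1}{2}[X, B] B^{-1}\) strips the \(\mathbf{e}_3\) component, while the anticommutator term recovers it.

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(x):
    """Map a real 3-vector to its Pauli-basis matrix."""
    return sum(xk * sk for xk, sk in zip(x, sigma))

u, v = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]   # span the x-y plane
B = 0.5 * (pauli(u) @ pauli(v) - pauli(v) @ pauli(u))
B_inv = B / (0.5 * np.trace(B @ B).real)  # bivector square is a negative scalar

X = pauli([1.0, 2.0, 3.0])
proj = 0.5 * (X @ B - B @ X) @ B_inv      # in-plane (projection) part
normal = 0.5 * (X @ B + B @ X) @ B_inv    # normal part

in_plane_ok = np.allclose(proj, pauli([1.0, 2.0, 0.0]))
split_ok = np.allclose(proj + normal, X)
```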
Velocity and momentum.
A decomposition of velocity into radial and perpendicular components should be straightforward in the Pauli algebra as it is in the Clifford algebra.
With a radially expressed position vector
velocity can be written by taking derivatives
or as above in the projection calculation with
By comparison we have
In assembled form we have
Here the commutator has been identified with the angular velocity bivector
Similarly, the linear and angular momentum split of a momentum vector is
and in vector form
Writing \(I\) for the moment of inertia, we have for our commutator
With the identification of the commutator with the angular momentum bivector we have the total momentum as
Acceleration and force.
Having computed velocity, and its radial split, the next logical thing to try is acceleration.
The acceleration will be
To continue, we need to compute one more derivative, which is
Putting things back together is a bit messy, but doing so gives
The anticommutator can be eliminated above using
The final reassembly is thus
The second term is the inwards facing radially directed acceleration, while the last is the rejective component of the acceleration.
It is usual to express this last term as the rate of change of angular momentum (torque). From this, we have
So, for constant mass, we can write the torque as
and finally have for the force
Although many of the GA references that can be found downplay the Pauli algebra as unnecessarily employing matrices as a basis, I believe this shows that there are some nice computational and logical niceties in this complete complex matrix formulation of the Clifford algebra. If nothing else, it takes some of the abstraction away, which is good for developing intuition. All of the generalized dot and wedge product relationships are easily derived, providing specific examples of the general pattern for the dot and blade product equations, which are sometimes supplied as definitions instead of consequences.
Also, the matrix concepts (if presented right, which I likely haven't done) should be accessible to most anybody out of high school these days, since both matrix algebra and complex numbers are covered as basics (at least that's how I recall it from fifteen years back ;)
Hopefully, having gone through the exercise of examining all these equivalent constructions will be useful in subsequent quantum physics study, to see how the matrix algebra used in that subject is tied to the classical geometrical vector constructions.
Expressions that were scary and mysterious looking like
are no longer so bad, since some of the geometric meaning that backs this operator expression is now clear (this is a quantization of angular momentum in a specific plane, and encodes the plane orientation as well as the magnitude). Knowing that the commutator was an antisymmetric sum, but not realizing the connection between that and the wedge product, previously made me wonder “where the hell did the i come from”?
This commutator equation is logically and geometrically a plane operation. It can therefore be expressed with a vector duality relationship employing the unit pseudoscalar \(i\). This is a nice step towards taking some of the mystery out of the math behind the physics of the subject (which has enough intrinsic mystery without the mathematical language adding to it).
It is unfortunate that QM uses this matrix operator formulation while almost none of classical physics does. By the time one gets to QM, learning an entirely new language is required, despite the fact that there are many powerful applications of this algebra in the classical domain, not just for rotations, which is recognized (in (), for example, where the Pauli algebra is used to express rotation quaternions).
 C. Doran and A.N. Lasenby. Geometric algebra for physicists. Cambridge University Press New York, Cambridge, UK, 1st edition, 2003.
 D. Hestenes. New Foundations for Classical Mechanics. Kluwer Academic Publishers, 1999.
 L. Dorst, D. Fontijne, and S. Mann. Geometric Algebra for Computer Science. Morgan Kaufmann, San Francisco, 2007.
 Wikipedia. Pauli matrices — wikipedia, the free encyclopedia [online]. 2009. [Online; accessed 16-June-2009]. http://en.wikipedia.org/w/index.php?title=Pauli_matrices&oldid=29679677%0.
 H. Goldstein. Classical mechanics. Cambridge: Addison-Wesley Press, Inc, 1st edition, 1951.