# Peeter Joot's (OLD) Blog.

## Linear transformations that retain two by two positive definiteness.

Posted by peeterjoot on October 4, 2009

# Motivation

Purely for fun, let's study the classes of linear transformations that retain the positive definiteness of a diagonal two by two quadratic form. Namely, the Hamiltonian

\begin{aligned}H = P^2 + Q^2\end{aligned} \quad\quad\quad(1)

under a change of variables that mixes position and momenta coordinates in phase space

\begin{aligned}\mathbf{z}' =\begin{bmatrix}p \\ q\end{bmatrix}=\begin{bmatrix}\alpha & \beta \\ a & b \\ \end{bmatrix}\begin{bmatrix}P \\ Q\end{bmatrix}= A \mathbf{z}\end{aligned} \quad\quad\quad(2)

We want the conditions on the matrix $A$ such that the quadratic form retains the diagonal nature

\begin{aligned}H = P^2 + Q^2 = p^2 + q^2\end{aligned} \quad\quad\quad(3)

which in matrix form is

\begin{aligned}H = \mathbf{z}^\text{T} \mathbf{z} = {\mathbf{z}'}^\text{T} \mathbf{z}' \end{aligned} \quad\quad\quad(4)

So the task is to solve for the constraints on the matrix elements for

\begin{aligned}I = A^\text{T} A = \begin{bmatrix}\alpha & a \\ \beta & b \\ \end{bmatrix}\begin{bmatrix}\alpha & \beta \\ a & b \\ \end{bmatrix}\end{aligned} \quad\quad\quad(5)

Strictly speaking, we could also scale and retain positive definiteness, but that case is not of interest to me right now, so I’ll use this term as described above.
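Before working out the constraints in general, a quick numerical sanity check (a plain-Python sketch, not part of the original derivation) confirms the logic of (4) and (5): any $A$ with $A^\text{T} A = I$ leaves $H$ unchanged. A rotation is the natural test case:

```python
import math

def transpose(M):
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# A rotation satisfies A^T A = I, so it should leave H = P^2 + Q^2
# unchanged.  (theta = 0.7 and z = (3, 4) are arbitrary choices.)
theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

z = [3.0, 4.0]       # (P, Q)
zp = matvec(A, z)    # (p, q)

H_before = z[0] ** 2 + z[1] ** 2
H_after = zp[0] ** 2 + zp[1] ** 2
AtA = matmul(transpose(A), A)

print(abs(H_before - H_after) < 1e-12)  # True
```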

# Guts

The expectation is that this will necessarily include all rotations. Will there be any other allowable linear transformations? Written out in full we want the solutions of

\begin{aligned}\begin{bmatrix}1 & 0 \\ 0 & 1 \\ \end{bmatrix}=\begin{bmatrix}\alpha & a \\ \beta & b \\ \end{bmatrix}\begin{bmatrix}\alpha & \beta \\ a & b \\ \end{bmatrix}= \begin{bmatrix}\alpha^2 + a^2 & \alpha \beta + a b \\ \alpha \beta + a b & \beta^2 + b^2\end{bmatrix}\end{aligned} \quad\quad\quad(6)

Written out explicitly we have three distinct equations to reduce

\begin{aligned}1 = \alpha^2 + a^2 \end{aligned} \quad\quad\quad(7)

\begin{aligned}1 = \beta^2 + b^2 \end{aligned} \quad\quad\quad(8)

\begin{aligned}0 = \alpha \beta + a b \end{aligned} \quad\quad\quad(9)

Solving for $a$ in (9), substituting into (7), and then using (8), we have

\begin{aligned}a &= -\frac{\alpha \beta}{b} \\ \implies \\ 1 &= \alpha^2 \left( 1 + \left(-\frac{\beta}{b} \right)^2 \right) \\ &= \frac{\alpha^2}{b^2} \left( b^2 + \beta^2 \right) \\ &= \frac{\alpha^2}{b^2} \end{aligned}

So, provided $b \ne 0$, we have a first simplifying identity

\begin{aligned}\alpha^2 = b^2\end{aligned} \quad\quad\quad(10)

Written out to check, this reduces our system of equations

\begin{aligned}\begin{bmatrix}1 & 0 \\ 0 & 1 \\ \end{bmatrix}=\begin{bmatrix}\alpha & a \\ \beta & \pm \alpha \\ \end{bmatrix}\begin{bmatrix}\alpha & \beta \\ a & \pm \alpha \\ \end{bmatrix}= \begin{bmatrix}\alpha^2 + a^2 & \alpha \beta \pm a \alpha \\ \alpha \beta \pm a \alpha & \beta^2 + \alpha^2\end{bmatrix}\end{aligned} \quad\quad\quad(11)

so our equations are now

\begin{aligned}1 = \alpha^2 + a^2 \end{aligned} \quad\quad\quad(12)

\begin{aligned}1 = \beta^2 + \alpha^2\end{aligned} \quad\quad\quad(13)

\begin{aligned}0 = \alpha (\beta \pm a )\end{aligned} \quad\quad\quad(14)

There are two cases to distinguish here. The first is the more trivial $\alpha = 0$ case, for which we find $a^2 = \beta^2 = 1$. For the other case we have

\begin{aligned}\beta = \mp a\end{aligned} \quad\quad\quad(15)

Again, writing out in full to check, this reduces our system of equations

\begin{aligned}\begin{bmatrix}1 & 0 \\ 0 & 1 \\ \end{bmatrix}=\begin{bmatrix}\alpha & a \\ \mp a & \pm \alpha \\ \end{bmatrix}\begin{bmatrix}\alpha & \mp a \\ a & \pm \alpha \\ \end{bmatrix}= \begin{bmatrix}\alpha^2 + a^2 & 0 \\ 0 & a^2 + \alpha^2\end{bmatrix}\end{aligned} \quad\quad\quad(16)
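The vanishing of the off-diagonal terms in (16) is an exact algebraic cancellation, not a numerical accident. Here is a sketch verifying it with rational arithmetic for the upper sign choice ($\beta = -a$, $b = \alpha$); the sample values are arbitrary and deliberately unnormalized, so $A^\text{T} A$ comes out as $(\alpha^2 + a^2) I$ rather than $I$:

```python
from fractions import Fraction

# Arbitrary unnormalized values; the structural constraints
# beta = -a, b = alpha alone should force A^T A proportional to I.
alpha = Fraction(3, 5)
a = Fraction(7, 11)
beta, b = -a, alpha

A = [[alpha, beta], [a, b]]

# (A^T A)[i][j] = sum_k A[k][i] * A[k][j], written out for 2x2
AtA = [[A[0][i] * A[0][j] + A[1][i] * A[1][j] for j in range(2)]
       for i in range(2)]

scale = alpha ** 2 + a ** 2
print(AtA == [[scale, 0], [0, scale]])  # True: off-diagonals vanish exactly
```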

We have now only one constraint left, and have reduced things to a single degree of freedom

\begin{aligned}1 = \alpha^2 + a^2 \end{aligned} \quad\quad\quad(17)

Or

\begin{aligned}\alpha = (1 - a^2)^{1/2}\end{aligned} \quad\quad\quad(18)

We’ve already used $\pm$ to distinguish the roots of $\alpha = \pm b$, so here let's allow this square root to take either positive or negative values, with the understanding that its sign is treated consistently wherever it appears. Our transformation, employing $a$ as the free variable, is now known to take any of the following forms

\begin{aligned}A &= \begin{bmatrix}(1 - a^2)^{1/2} & \mp a \\ a & \pm (1 - a^2)^{1/2} \\ \end{bmatrix} \\ &= \begin{bmatrix}0 & \pm 1 \\ (1)^{1/2} & 0 \\ \end{bmatrix} \\ &=\begin{bmatrix}\alpha & \beta \\ a & 0 \\ \end{bmatrix} \end{aligned} \quad\quad\quad(19)

The last of these (the $b=0$ case from earlier) was not considered, but doing so one finds that it produces nothing different from the second form of the transformation above. That leaves us with two possible forms of linear transformations that are allowable for the desired constraints, the first of which screams for a trigonometric parameterization.

For ${\left\lvert{a}\right\rvert} \le 1$ we can parameterize with $a = \sin\theta$. Should we allow complex valued linear transformations? If so, $a = \cosh(\theta)$ is a reasonable way to parameterize the matrix for the ${\left\lvert{a}\right\rvert} > 1$ case. The complete set of allowable linear transformations in matrix form is now

\begin{aligned}A &= \begin{bmatrix}1^{1/2} \cos\theta & \mp \sin\theta \\ \sin\theta & \pm 1^{1/2} \cos\theta \\ \end{bmatrix} \\ A &= \begin{bmatrix}(-1)^{1/2} \sinh\theta & \mp \cosh\theta \\ \cosh\theta & \pm (-1)^{1/2} \sinh\theta \\ \end{bmatrix} \\ A &= \begin{bmatrix}0 & \pm 1 \\ 1^{1/2} & 0 \\ \end{bmatrix}\end{aligned} \quad\quad\quad(22)
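Note that the hyperbolic members satisfy $A^\text{T} A = I$ with a plain (unconjugated) transpose even though their entries are complex: they are complex orthogonal, not unitary (a $\cosh\theta$ entry can exceed one). A quick numerical check of the upper-sign hyperbolic member (a sketch, with $\theta = 0.9$ chosen arbitrarily):

```python
import math

theta = 0.9
sh, ch = math.sinh(theta), math.cosh(theta)

# Upper-sign hyperbolic member: complex entries, but A^T A = I still
# holds with the plain transpose (no complex conjugation involved).
A = [[1j * sh, -ch],
     [ch, 1j * sh]]

AtA = [[A[0][i] * A[0][j] + A[1][i] * A[1][j] for j in range(2)]
       for i in range(2)]

ok = all(abs(AtA[i][j] - (1 if i == j else 0)) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```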

There are really four different matrices in each of the above. Removing all the shorthand for clarity, we have finally

\begin{aligned}A \in \{&\begin{bmatrix}\cos\theta & - \sin\theta \\ \sin\theta & \cos\theta \\ \end{bmatrix}, \begin{bmatrix}-\cos\theta & - \sin\theta \\ \sin\theta & -\cos\theta \\ \end{bmatrix}, \begin{bmatrix}\cos\theta & \sin\theta \\ \sin\theta & - \cos\theta \\ \end{bmatrix}, \begin{bmatrix}-\cos\theta & \sin\theta \\ \sin\theta & \cos\theta \\ \end{bmatrix}, \\ & \begin{bmatrix}i \sinh\theta & -\cosh\theta \\ \cosh\theta & i \sinh\theta \\ \end{bmatrix}, \begin{bmatrix}-i \sinh\theta & -\cosh\theta \\ \cosh\theta & -i \sinh\theta \\ \end{bmatrix}, \begin{bmatrix}i \sinh\theta & \cosh\theta \\ \cosh\theta & - i \sinh\theta \\ \end{bmatrix}, \begin{bmatrix}-i \sinh\theta & \cosh\theta \\ \cosh\theta & i \sinh\theta \\ \end{bmatrix}, \\ &\begin{bmatrix}0 & 1 \\ 1 & 0 \\ \end{bmatrix},\begin{bmatrix}0 & 1 \\ -1 & 0 \\ \end{bmatrix},\begin{bmatrix}0 & -1 \\ 1 & 0 \\ \end{bmatrix},\begin{bmatrix}0 & -1 \\ -1 & 0 \\ \end{bmatrix}\}\end{aligned}

The last four possibilities are now seen to be redundant since they can be incorporated into the $\theta = \pm \pi/2$ cases of the real trig parameterizations where $\sin\theta = \pm 1$, and $\cos\theta = 0$. Employing a $\theta' = -\theta$ change of variables, we find that two of the hyperbolic parameterizations are also redundant and can express the reduced solution set as

\begin{aligned}A \in \{\pm \begin{bmatrix}\cos\theta & - \sin\theta \\ \sin\theta & \cos\theta \\ \end{bmatrix}, \pm \begin{bmatrix}\cos\theta & \sin\theta \\ \sin\theta & - \cos\theta \\ \end{bmatrix}, \pm \begin{bmatrix}i \sinh\theta & -\cosh\theta \\ \cosh\theta & i \sinh\theta \\ \end{bmatrix}, \pm \begin{bmatrix}i \sinh\theta & \cosh\theta \\ \cosh\theta & - i \sinh\theta \\ \end{bmatrix}\}\end{aligned} \quad\quad\quad(25)
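All eight matrices in (25) (four families times the overall $\pm$) can be checked in one pass (a numerical sketch; $\theta = 1.1$ is arbitrary):

```python
import math

theta = 1.1
c, s = math.cos(theta), math.sin(theta)
sh, ch = math.sinh(theta), math.cosh(theta)

# The four families of (25), each doubled by an overall sign below.
families = [
    [[c, -s], [s, c]],                 # rotation
    [[c, s], [s, -c]],                 # reflection
    [[1j * sh, -ch], [ch, 1j * sh]],   # hyperbolic
    [[1j * sh, ch], [ch, -1j * sh]],   # hyperbolic (other sign pattern)
]

def check(A):
    # Complex orthogonality: A^T A = I with the plain (unconjugated)
    # transpose, entry by entry.
    AtA = [[A[0][i] * A[0][j] + A[1][i] * A[1][j] for j in range(2)]
           for i in range(2)]
    return all(abs(AtA[i][j] - (i == j)) < 1e-12
               for i in range(2) for j in range(2))

print(all(check(A) and check([[-x for x in row] for row in A])
          for A in families))  # True: all eight sign variants work
```

Incidentally, $A^\text{T} A = I$ over the complex numbers is precisely the defining condition of the complex orthogonal group $O(2, \mathbb{C})$, which seems a plausible name for this class of transformations.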

I suspect this class of transformations has a name in the grand group classification scheme, but I don’t know what it is.