
A hoop and spring oscillator problem.

Posted by peeterjoot on June 19, 2010


Motivation.

Nolan was attempting to set up and solve the equations for the following system, shown in the figure below.

[Figure: Coupled hoop and spring]

One mass is connected between two springs to a bar. That bar moves up and down, forced by the motion of the other mass along an immovable hoop. While Nolan didn’t include any gravitational force in his potential terms (i.e., the system perhaps lying flat on a table), it doesn’t take much more to include it, and I’ll do so. I also include the distance L to the center of the hoop, which I believe is required.

Guts.

The Lagrangian can be written by inspection. Writing x = x_1, and x_2 = R \sin\theta, we have

\begin{aligned}\mathcal{L} = \frac{1}{{2}} m_1 \dot{x}^2 + \frac{1}{{2}} m_2 R^2 \dot{\theta}^2 - \frac{1}{{2}} k_1 x^2 - \frac{1}{{2}} k_2 ( L + R \sin\theta - x )^2- m_1 g x- m_2 g ( L + R \sin\theta).\end{aligned} \hspace{\stretch{1}}(2.1)
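Since the algebra is mechanical, it can also be checked symbolically. Here’s a minimal sympy sketch of my own (not part of the original post; the symbol names are my assumptions) that grinds out both Euler-Lagrange equations from (2.1):

import sympy as sp

t = sp.symbols('t')
m1, m2, k1, k2, R, L, g = sp.symbols('m1 m2 k1 k2 R L g', positive=True)
x = sp.Function('x')(t)
theta = sp.Function('theta')(t)

# the Lagrangian (2.1)
Lag = (sp.Rational(1, 2) * m1 * x.diff(t)**2
       + sp.Rational(1, 2) * m2 * R**2 * theta.diff(t)**2
       - sp.Rational(1, 2) * k1 * x**2
       - sp.Rational(1, 2) * k2 * (L + R * sp.sin(theta) - x)**2
       - m1 * g * x
       - m2 * g * (L + R * sp.sin(theta)))

# print d/dt (dL/dq') - dL/dq for q = x and q = theta; each should vanish
# on solutions of the equations of motion below
for q in (x, theta):
    print(sp.simplify(sp.diff(Lag, q.diff(t)).diff(t) - sp.diff(Lag, q)))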

Evaluation of the Euler-Lagrange equations gives

\begin{aligned}m_1 \ddot{x} &= - k_1 x + k_2 ( L + R \sin\theta - x ) - m_1 g \\ m_2 R^2 \ddot{\theta} &= - k_2 ( L + R \sin\theta - x ) R \cos\theta - m_2 g R \cos\theta,\end{aligned} \hspace{\stretch{1}}(2.2)

or

\begin{aligned}\ddot{x} &= -x \frac{k_1 + k_2}{m_1} + \frac{k_2 R \sin\theta}{m_1} - g + \frac{k_2 L }{m_1} \\ \ddot{\theta} &= - \frac{1}{R}\left( \frac{k_2}{m_2} \left( L + R \sin\theta - x \right) +g \right) \cos\theta.\end{aligned} \hspace{\stretch{1}}(2.4)
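There’s no closed form solution in general, but the system is easy to integrate numerically. A hedged scipy sketch (the parameter values and initial conditions are arbitrary choices of mine):

import numpy as np
from scipy.integrate import solve_ivp

# arbitrary illustrative parameters, not from the derivation
m1, m2, k1, k2, R, L, g = 1.0, 1.0, 4.0, 2.0, 0.5, 1.0, 9.8

def rhs(t, y):
    """State y = (x, theta, xdot, thetadot); time derivative per (2.4)."""
    x, theta, xdot, thetadot = y
    xdd = -x * (k1 + k2) / m1 + k2 * R * np.sin(theta) / m1 - g + k2 * L / m1
    thdd = -(k2 / m2 * (L + R * np.sin(theta) - x) + g) * np.cos(theta) / R
    return [xdot, thetadot, xdd, thdd]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.1, 0.0, 0.0], rtol=1e-9, atol=1e-9)
print(sol.y[:, -1])  # state at t = 10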

Just like any other coupled pendulum system, this one is non-linear. There’s no obvious way to solve this in closed form, but we could determine a solution in the neighborhood of a point (x, \theta) = (x_0, \theta_0). Let’s switch our dynamical variables to ones that express the deviation from the initial point \delta x = x - x_0, and \delta \theta = \theta - \theta_0, with u = (\delta x)', and v = (\delta \theta)'. Our system then takes the form

\begin{aligned}u' &= f(x,\theta) =-x \frac{k_1 + k_2}{m_1} + \frac{k_2 R \sin\theta}{m_1} - g + \frac{k_2 L }{m_1} \\ v' &= g(x,\theta) =- \frac{1}{{R}}\left( \frac{k_2}{m_2} \left( L + R \sin\theta - x \right) +g \right) \cos\theta \\ (\delta x)' &= u \\ (\delta \theta)' &= v.\end{aligned} \hspace{\stretch{1}}(2.6)

We can use a first order Taylor approximation of the form f(x, \theta) \approx f(x_0, \theta_0) + f_x(x_0,\theta_0) (\delta x) + f_\theta(x_0,\theta_0) (\delta \theta). So, to first order, our system has the approximation

\begin{aligned}u' &= -x_0 \frac{k_1 + k_2}{m_1} + \frac{k_2 R \sin\theta_0}{m_1} - g + \frac{k_2 L }{m_1} -(\delta x) \frac{k_1 + k_2}{m_1} + \frac{k_2 R \cos\theta_0}{m_1} (\delta \theta)\\ v' &= - \frac{1}{R}\left( \frac{k_2}{m_2} \left( L + R \sin\theta_0 - x_0 \right) +g \right) \cos\theta_0+ \frac{k_2 \cos\theta_0}{m_2 R} (\delta x)+ \frac{1}{R}\left( \frac{k_2}{m_2} \left( \left( L - x_0 \right) \sin\theta_0 - R \cos 2\theta_0 \right) + g \sin\theta_0 \right) (\delta \theta)\\ (\delta x)' &= u \\ (\delta \theta)' &= v.\end{aligned} \hspace{\stretch{1}}(2.10)

This would be tidier in matrix form with \mathbf{x} = (u, v, \delta x, \delta \theta)

\begin{aligned}\mathbf{x}' &=\begin{bmatrix}-x_0 \frac{k_1 + k_2}{m_1} + \frac{k_2 R \sin\theta_0}{m_1} - g + \frac{k_2 L }{m_1} \\ - \frac{1}{R}\left( \frac{k_2}{m_2} \left( L + R \sin\theta_0 - x_0 \right) +g \right) \cos\theta_0 \\ 0 \\ 0\end{bmatrix}+\begin{bmatrix}0 & 0 &-\frac{k_1 + k_2}{m_1} & \frac{k_2 R \cos\theta_0}{m_1} \\ 0 & 0 & \frac{k_2 \cos\theta_0}{m_2 R} & \frac{1}{R}\left( \frac{k_2}{m_2} \left( \left( L - x_0 \right) \sin\theta_0 - R \cos 2\theta_0 \right) + g \sin\theta_0 \right) \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ \end{bmatrix}\mathbf{x}.\end{aligned} \hspace{\stretch{1}}(2.14)
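For concreteness, here’s a small sketch (the function name and structure are mine) that evaluates the constant vector and the matrix of (2.14) at a chosen linearization point:

import numpy as np

def linearized_system(x0, theta0, m1, m2, k1, k2, R, L, g):
    """Return a, A, B of (2.14)/(2.15) at the point (x0, theta0)."""
    f0 = -x0 * (k1 + k2) / m1 + k2 * R * np.sin(theta0) / m1 - g + k2 * L / m1
    g0 = -(k2 / m2 * (L + R * np.sin(theta0) - x0) + g) * np.cos(theta0) / R
    fx = -(k1 + k2) / m1                    # df/dx
    ft = k2 * R * np.cos(theta0) / m1       # df/dtheta
    gx = k2 * np.cos(theta0) / (m2 * R)     # dg/dx
    gt = (k2 / m2 * ((L - x0) * np.sin(theta0) - R * np.cos(2 * theta0))
          + g * np.sin(theta0)) / R         # dg/dtheta
    a = np.array([f0, g0, 0.0, 0.0])
    A = np.array([[fx, ft], [gx, gt]])
    B = np.block([[np.zeros((2, 2)), A], [np.eye(2), np.zeros((2, 2))]])
    return a, A, B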

This reduces the problem to the solution of a first order matrix equation of the form

\begin{aligned}\mathbf{x}' = \mathbf{a} + \begin{bmatrix}0 & A \\ I & 0\end{bmatrix}\mathbf{x} = \mathbf{a} + \mathbf{B} \mathbf{x}\end{aligned} \hspace{\stretch{1}}(2.15)

where \mathbf{a} is a constant vector and A is a constant matrix. Such a matrix equation has the solution

\begin{aligned}\mathbf{x} = e^{B t} \mathbf{x}_0 + (e^{Bt} - I) B^{-1} \mathbf{a},\end{aligned} \hspace{\stretch{1}}(2.16)

but the zeros in B should allow the exponential and inverse to be calculated with less work. That inverse is readily verified to be

\begin{aligned}B^{-1} =\begin{bmatrix}0 & I \\ A^{-1} & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.17)
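Both this inverse and the solution (2.16) are easy to confirm numerically, reusing linearized_system from the sketch above (the linearization point and initial deviation are again arbitrary choices of mine):

import numpy as np
from scipy.linalg import expm

a, A, B = linearized_system(0.0, 0.1, 1.0, 1.0, 4.0, 2.0, 0.5, 1.0, 9.8)

# block inverse (2.17)
Binv = np.block([[np.zeros((2, 2)), np.eye(2)],
                 [np.linalg.inv(A), np.zeros((2, 2))]])
print(np.allclose(Binv @ B, np.eye(4)))  # expect True

# solution (2.16) at t = 0.5 for a small initial deviation (u, v, dx, dtheta)
x0 = np.array([0.0, 0.0, 0.01, 0.0])
t = 0.5
xt = expm(B * t) @ x0 + (expm(B * t) - np.eye(4)) @ (Binv @ a)
print(xt)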

It is also not hard to show that

\begin{aligned}B^{2n} = \begin{bmatrix}A^n & 0 \\ 0 & A^n\end{bmatrix} \\ B^{2n+1} = \begin{bmatrix}0 & A^{n+1} \\ A^n & 0\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.18)

Together this allows for the power series expansion

\begin{aligned}e^{Bt} &=\begin{bmatrix}\cosh(t \sqrt{A}) & \sqrt{A} \sinh(t \sqrt{A}) \\ \sinh(t \sqrt{A}) \frac{1}{\sqrt{A}} & \cosh(t \sqrt{A})\end{bmatrix}.\end{aligned} \hspace{\stretch{1}}(2.20)
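This block form can be checked against a general matrix exponential. Note that sqrtm returns a complex matrix when A has a negative eigenvalue; the hyperbolic functions handle that consistently, and the imaginary parts cancel in the final comparison:

import numpy as np
from scipy.linalg import expm, coshm, sinhm, sqrtm

# reusing linearized_system from the sketch above
a, A, B = linearized_system(0.0, 0.1, 1.0, 1.0, 4.0, 2.0, 0.5, 1.0, 9.8)
t = 0.3
rootA = sqrtm(A)  # complex if A has a negative eigenvalue

# assemble (2.20) block by block
blocks = np.vstack([
    np.hstack([coshm(t * rootA), rootA @ sinhm(t * rootA)]),
    np.hstack([sinhm(t * rootA) @ np.linalg.inv(rootA), coshm(t * rootA)]),
])
print(np.allclose(blocks, expm(B * t)))  # expect True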

All of the remaining submatrix expansions are straightforward to calculate once the eigenvalues and eigenvectors of A are known. Specifically, suppose that we have

\begin{aligned}A = U \begin{bmatrix}\lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix}U^{-1}.\end{aligned} \hspace{\stretch{1}}(2.21)

Then all of the perhaps non-obvious matrix functions above reduce to

\begin{aligned}A^{-1} &= U \begin{bmatrix}\lambda_1^{-1} & 0 \\ 0 & \lambda_2^{-1} \end{bmatrix}U^{-1} \\ \sqrt{A} &= U \begin{bmatrix}\sqrt{\lambda_1} & 0 \\ 0 & \sqrt{\lambda_2}\end{bmatrix}U^{-1} \\ \cosh(t \sqrt{A}) &= U \begin{bmatrix}\cosh( t \sqrt{\lambda_1} ) & 0 \\ 0 & \cosh( t \sqrt{\lambda_2} )\end{bmatrix}U^{-1} \\ \sqrt{A} \sinh(t \sqrt{A}) &= U \begin{bmatrix}\sqrt{\lambda_1} \sinh( t \sqrt{\lambda_1} ) & 0 \\ 0 & \sqrt{\lambda_2} \sinh( t \sqrt{\lambda_2} )\end{bmatrix}U^{-1} \\ \sinh(t \sqrt{A}) \frac{1}{\sqrt{A}} &= U \begin{bmatrix}\sinh( t \sqrt{\lambda_1} )/\sqrt{\lambda_1} & 0 \\ 0 & \sinh( t \sqrt{\lambda_2} )/\sqrt{\lambda_2}\end{bmatrix}U^{-1}.\end{aligned} \hspace{\stretch{1}}(2.22)
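Numerically, all of these reduce to one helper. A sketch assuming A is diagonalizable, with complex arithmetic covering any negative eigenvalues (again reusing linearized_system from above):

import numpy as np
from scipy.linalg import coshm, sqrtm

def matrix_function(A, f):
    """Apply the scalar function f to A via its eigendecomposition (2.21)."""
    lam, U = np.linalg.eig(A.astype(complex))
    return U @ np.diag(f(lam)) @ np.linalg.inv(U)

a, A, B = linearized_system(0.0, 0.1, 1.0, 1.0, 4.0, 2.0, 0.5, 1.0, 9.8)
t = 0.3
cosh_tA = matrix_function(A, lambda lam: np.cosh(t * np.sqrt(lam)))
sinh_tA_invroot = matrix_function(A, lambda lam: np.sinh(t * np.sqrt(lam)) / np.sqrt(lam))

# cross-check one of them against scipy's own matrix functions
print(np.allclose(cosh_tA, coshm(t * sqrtm(A))))  # expect True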

An interesting question is how the eigenvalues and eigenvectors change with each small change to the initial position \mathbf{x}_0 in phase space. Can these be related to each other?
