Central Limit Theorem.
Posted by peeterjoot on February 6, 2009
While doing the dishes tonight I listened to Prof Brad Osgood’s Fourier Series Lecture 10, on the Central Limit Theorem.
Without the video, this one was a bit hard to follow, but here are some notes for later follow-up. If I don't write them down now, going through this on my own later may be tough.
First off he displays some repeated convolutions. A rect function convolved with itself gets smoother, and a bit bell curvish. Another convolution with a rect function gets more so, and after a few, it is “spookily bell curvish”.
He shows the same thing with a randomly generated function (again, I am only imagining what he showed): lines interconnecting random points in some interval, repeatedly convolved with itself, again producing a bell curve like shape. "Spooky!" ;)
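My reconstruction of that demo (this is my own code, not Prof Osgood's): repeatedly convolve a discrete rect pulse with itself and watch it turn bell curvish.

```python
import numpy as np

# A discrete "rect" pulse.
rect = np.ones(50)

# Repeatedly self-convolve, renormalizing so it stays density-like.
f = rect.copy()
for _ in range(4):
    f = np.convolve(f, rect)
    f /= f.sum()

# After a few passes the result is symmetric with a single central hump:
# the first convolution gives a triangle, and it smooths out from there.
```

The same loop works starting from any rough starting function (e.g. `np.random.rand(50)`), which is exactly the "spooky" part of the demo.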
Next is a brief mention of random variables, probability density functions, expectation values, variance, and so forth. With these ideas it is shown that, given the pdfs for two identically distributed random variables, the probability that the sum of the two is less than some value has the form of a convolution of the two pdfs. The process can be repeated, with the end result producing an n-fold convolution of pdfs.
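A quick numerical check of that claim (notation mine): if X and Y are i.i.d. with density p, the density of X + Y is the convolution (p * p). For Uniform(0, 1) the sum is known to have the triangular density, so the two should agree.

```python
import numpy as np

dx = 0.001
x = np.arange(0, 1, dx)
p = np.ones_like(x)                  # Uniform(0, 1) pdf on [0, 1)

# Discrete approximation of the convolution integral (p * p)(x).
p_sum = np.convolve(p, p) * dx       # density of X + Y on [0, 2)

# Known closed form: triangular density, rising to 1 at x = 1.
grid = np.arange(len(p_sum)) * dx
triangle = np.where(grid <= 1.0, grid, 2.0 - grid)

# The two agree up to discretization error of order dx.
err = np.max(np.abs(p_sum - triangle))
```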
Now, hearing that treatment without seeing it, at his machine gun speech pace, I didn't know exactly what he had ended up writing down. It does sound like it is straightforward enough to go through this, and I do seem to recall covering it eons ago back in school. I probably have this in a textbook somewhere if I can't figure it out on my own later.
With the background covered, the finale is a Fourier transformation of the convolution, since the transform turns the convolution into a product in the frequency domain. I'll have to try this myself for some simple cases with two, three, four pdfs convolved, and I can probably see where he went with this. The last bit ended up being a Taylor series expansion of an exponential. In my imagined watch-along I completely missed how it got to that point. In the end, after discarding some higher order terms, he gets a Gaussian, and therefore a Gaussian again with the inverse Fourier transform, in the limit of infinitely repeated convolutions of arbitrary identically distributed pdfs (IIDPF or something was mentioned).
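My guess at the calculation (the standard characteristic function argument; the notation and normalization here are mine and may not match what he wrote): for a zero mean, unit variance pdf \( p \), with the transform convention

\[
\hat{p}(s) = \int_{-\infty}^{\infty} p(x) e^{-2\pi i s x} \, dx
= 1 - 2\pi i s \, \mathbb{E}[X] - 2\pi^2 s^2 \, \mathbb{E}[X^2] + \cdots
\approx 1 - 2\pi^2 s^2,
\]

the transform of the n-fold convolution of the suitably scaled pdfs is the n-th power

\[
\left[ \hat{p}\!\left( \frac{s}{\sqrt{n}} \right) \right]^n
\approx \left( 1 - \frac{2\pi^2 s^2}{n} \right)^n
\longrightarrow e^{-2\pi^2 s^2},
\]

and the inverse transform of \( e^{-2\pi^2 s^2} \) is the Gaussian \( \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \). That would be where the Taylor expansion of the exponential and the discarded higher order terms come in.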
Now, why is this interesting enough to care about? I have an intuition, or suspicion, that these ideas are behind the Heisenberg uncertainty principle. Certainly the idea that Fourier transform analysis of probability functions is a useful tool is particularly interesting in the context of Quantum Mechanics.
I think that I'm going to have to stop listening to educational podcasts for a while. I am falling behind keeping up with the ideas coming out of them, and it will get overwhelming before too long!