Uniqueness of Moment Generating Functions
18/10/2016
Most undergraduate probability textbooks make extensive use of the result that a Moment Generating Function uniquely determines a distribution. In particular, we can use this result to demonstrate the effect of adding or multiplying random variables. For example, the proof that the sum of two independent Poisson Random Variables is also a Poisson Random Variable (with mean equal to the sum of the means of the two Poissons) is much easier if we can invoke this result.
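Before the proof, here is a quick empirical sanity check of that claim. This is only a minimal simulation sketch (it assumes numpy and scipy are available; the parameters, sample size, and seed are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam1, lam2, n = 3.0, 5.0, 200_000       # arbitrary illustrative parameters and sample size

# Sum of independent Poisson(lam1) and Poisson(lam2) samples
samples = rng.poisson(lam1, n) + rng.poisson(lam2, n)

# Compare the empirical frequencies with the Poisson(lam1 + lam2) pmf
ks = np.arange(0, 25)
empirical = np.array([(samples == k).mean() for k in ks])
theoretical = stats.poisson.pmf(ks, lam1 + lam2)
print(np.abs(empirical - theoretical).max())   # small, and it shrinks as n grows
```

The largest gap between the empirical frequencies and the Poisson($\lambda_1 + \lambda_2$) pmf should be of the order of the Monte Carlo error, which is consistent with (though of course no substitute for) the proof below.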
Proof for the sum of two Poisson distributions:
Suppose $X$ and $Y$ are independent Poisson Random Variables with parameters $\lambda_1$ and $\lambda_2$.
We know that the $MGF$ of a Poisson Distribution with parameter $\lambda$ is $e^{ \lambda ( e^t - 1) }$.
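As a quick check of this formula, we can compare a truncated version of the defining sum $E[e^{tX}] = \sum_{k} e^{tk} P(X = k)$ against the closed form at a single point. This is only a rough numerical sketch (it assumes numpy and scipy are available; the parameter, the evaluation point and the truncation are arbitrary choices):

```python
import numpy as np
from scipy import stats

lam, t = 2.5, 0.3                     # arbitrary illustrative parameter and evaluation point
ks = np.arange(0, 100)                # truncate the infinite sum; the tail is negligible here

# E[e^{tX}] computed directly from the pmf, versus the closed form e^{lam(e^t - 1)}
mgf_from_sum = np.sum(np.exp(t * ks) * stats.poisson.pmf(ks, lam))
mgf_closed_form = np.exp(lam * (np.exp(t) - 1))
print(mgf_from_sum, mgf_closed_form)  # the two values agree to many decimal places
```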
We can now use the result that:
$MGF_{X+Y} = MGF_X \cdot MGF_Y = e^{ \lambda_1 ( e^t - 1) } \cdot e^{ \lambda_2 ( e^t - 1) } = e^{ (\lambda_1 + \lambda_2) ( e^t - 1) }$
Which is the $MGF$ of a Poisson Distribution with parameter $\lambda_1 + \lambda_2$, proving the result.

Do we need a proof?

Call me pedantic, but I never liked the fact that this uniqueness result is usually stated without proof. For me, there was always an elegance to a textbook that began with definitions and axioms, and possibly a handful of weak results which were taken as given, and then proved everything required on the way. In my experience, Probability and Stats books seem particularly prone to not being self-contained and rigorous in this way. I suspect it has something to do with the fact that probability and stats, if done 100% rigorously, are both extremely technical and difficult! It would be a shame if we had to wait until we had mastered complex analysis and measure theory before we could learn about predicting the number of black and white balls in an urn. (I'm joking: there are actually interesting and useful parts to probability once you get past those boring exercises about urns.)

I think my discomfort is added to by the fact that there is also something slightly disingenuous in using a very powerful result to prove a fairly trivial special case without really understanding the general result you are using. If you do this too much, you never really understand why anything is true, just that it is true.

Not only is this proof not often given, I had to look pretty hard to find it anywhere. The general proof for all random variables requires either measure theory or complex analysis and is quite involved, so I thought I'd just write up the result for discrete random variables. So here is a proof of the uniqueness of $MGFs$ for discrete random variables over the support $\mathbb{N}_0$.

Uniqueness of MGFs

Suppose $X$ and $Y$ are discrete random variables over $\{ 0, 1, 2, ... \}$ with probability mass functions $f_X$ and $f_Y$. Further suppose that their MGFs are equal for every $t$:

$$MGF_X(t) = MGF_Y(t)$$

That is:

$$\sum_{i=0}^{\infty} e^{ti} f_X (i) = \sum_{j=0}^{\infty} e^{tj} f_Y (j)$$

Then,

$$\sum_{i=0}^{\infty} e^{ti} f_X (i) - \sum_{j=0}^{ \infty } e^{tj} f_Y (j) = 0$$

Changing the index for the second sum:

$$\sum_{i=0}^{\infty} e^{ti} f_X (i) - \sum_{i=0}^{ \infty } e^{ti} f_Y (i) = 0 $$

Bringing the two sums together:

$$\sum_{i=0}^{\infty} ( e^{ti} f_X (i) - e^{ti} f_Y (i) ) = 0 $$

Rearranging:

$$\sum_{i=0}^{\infty} e^{ti}(f_X (i) - f_Y (i) ) = 0 $$

We can now think of this as a power series in $e^t$ with coefficients:

$$g(i) = f_X (i) - f_Y (i)$$

i.e.

$$\sum_{i=0}^{\infty} (e^t)^i g(i) = 0 $$

This allows us to use the result that a power series which is equal to $0$ on an interval (here it vanishes for every value of $e^t$, i.e. on $(0, \infty)$) must have all of its coefficients equal to $0$. To see this, we consider the $n$th derivative of a power series $\sum_k c_k x^k$, which allows us to recover the $n$th coefficient:

$$f^{(n)}(0) = \sum_{k=n}^\infty \frac{k!}{(k-n)!} c_k 0^{k-n} = n! c_{n}$$

This gives us our result, since $g(i) = 0$ for every $i$ implies that the two functions $f_X$ and $f_Y$ are equal.
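To make that last step a little more concrete, here is a small symbolic sketch of the coefficient-recovery identity $f^{(n)}(0) = n! \, c_n$ (it assumes sympy is available, and the example coefficients are made up purely for illustration):

```python
import sympy as sp

x = sp.symbols('x')

# A made-up example series (a polynomial here) standing in for sum_i g(i) * x^i
coeffs = [2, 0, -3, 5]                # plays the role of g(0), g(1), g(2), g(3)
f = sum(c * x**k for k, c in enumerate(coeffs))

# Recover each coefficient from the n-th derivative at 0: c_n = f^(n)(0) / n!
recovered = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(len(coeffs))]
print(recovered)                      # [2, 0, -3, 5]
```

In particular, if the series is identically zero then every derivative at $0$ vanishes, so every coefficient $g(i)$ must be $0$, which is exactly the step used in the proof above.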
Comments
Not sure I get the last step of your proof. How can you know that $f_X - f_Y$ is always positive? If it changes sign, then $e^{tx}$ being positive does not necessarily imply that $f_X - f_Y = 0$.
I have the same question as Anders Berthelsen. How can you know $f_X - f_Y$ is always positive? From my point of view, another possibility is that some of the $f_X - f_Y$ terms are negative and some of them are positive and they add to 0. Please explain further.
Thanks!
I meant "how can you know $f_X - f_Y$ is always zero" *
Never mind, I got it. All the coefficients of a polynomial that is identically zero are themselves zero. For anyone wondering why, you can easily prove it by taking the derivative $n$ times (the last coefficient will equal zero; now work backwards and all coefficients are zero).
Again, thanks for your proof.