Most undergraduate probability textbooks make extensive use of the result that a Moment Generating Function uniquely determines the distribution of a random variable.
For example, the proof that the sum of two independent Poisson random variables is also a Poisson random variable is much easier if we can invoke this result.

Sum of Two Poisson Distributions

Suppose $X$ and $Y$ are independent Poisson random variables with parameters $\lambda_1$ and $\lambda_2$.
We know that the $MGF$ of a Poisson distribution with parameter $\lambda$ is $e^{ \lambda ( e^t - 1) }$.
We can now use the result that:
$MGF_{X+Y} = MGF_X * MGF_Y = e^{ \lambda_1 ( e^t - 1) } * e^{ \lambda_2 ( e^t - 1) } = e^{ (\lambda_1 +\lambda_2) ( e^t - 1) }$
Which is the $MGF$ of a Poisson distribution with parameter $\lambda_1 + \lambda_2$, proving the result.

Do we need a proof?

Call me pedantic, but I never liked the fact that this uniqueness result is normally stated without proof. For me, there was always an elegance to a textbook that began with definitions and axioms and proved everything that it required along the way. There also seems to be something disingenuous in using a very powerful result to prove a fairly trivial special case without really understanding the result you are using. You never really understand why anything is true, just that it is true. Not only is this proof not often given, I had to look pretty hard to find it anywhere. The general proof for all random variables requires measure theory and is quite complex, so I thought I'd just write up the result for discrete random variables. So here is a proof of the uniqueness of $MGFs$ for discrete random variables.

Uniqueness of MGFs

Suppose $X$ and $Y$ are discrete random variables taking values in $\{ x_0 , x_1, x_2, ... \} $. Further suppose that:

$$MGF_X (t) = MGF_Y (t), \text{ for all } t$$

That is:

$$\sum_{i=0}^{\infty} e^{tx_i} f_X (x_i) = \sum_{j=0}^{ \infty } e^{tx_j} f_Y (x_j)$$

Then,

$$\sum_{i=0}^{\infty} e^{tx_i} f_X (x_i) - \sum_{j=0}^{ \infty } e^{tx_j} f_Y (x_j) = 0$$

Changing the index for the second sum:

$$\sum_{i=0}^{\infty} e^{tx_i} f_X (x_i) - \sum_{i=0}^{ \infty } e^{tx_i} f_Y (x_i) = 0 $$

Bringing the two sums together:

$$\sum_{i=0}^{\infty} ( e^{tx_i} f_X (x_i) - e^{tx_i} f_Y (x_i) ) = 0 $$

Rearranging:

$$\sum_{i=0}^{\infty} e^{tx_i}(f_X (x_i) - f_Y (x_i) ) = 0 $$

Since this identity holds for every value of $t$, each $e^{tx_i}$ is strictly greater than 0, and the functions $e^{tx_i}$ are linearly independent, each coefficient $f_X (x_i) - f_Y (x_i)$ must vanish. This implies that:
$f_X (x_i) = f_Y (x_i), \forall x_i $
Which proves the result.
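The Poisson result above is easy to sanity-check numerically. The sketch below (the parameter values are arbitrary, chosen just for illustration) computes the PMF of $X + Y$ by direct convolution of the two Poisson PMFs and compares it with the PMF of a Poisson with parameter $\lambda_1 + \lambda_2$ — exactly what the MGF argument predicts.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return exp(-lam) * lam**k / factorial(k)

def poisson_mgf(t, lam):
    """MGF of Poisson(lam): exp(lam * (e^t - 1))."""
    return exp(lam * (exp(t) - 1.0))

lam1, lam2 = 1.5, 2.5  # arbitrary example parameters

# PMF of X + Y by direct convolution:
# P(X + Y = n) = sum over k of P(X = k) * P(Y = n - k)
def conv_pmf(n):
    return sum(poisson_pmf(k, lam1) * poisson_pmf(n - k, lam2)
               for k in range(n + 1))

# The convolved PMF matches the Poisson(lam1 + lam2) PMF...
for n in range(15):
    assert abs(conv_pmf(n) - poisson_pmf(n, lam1 + lam2)) < 1e-9

# ...and the MGFs multiply as claimed.
for t in (-0.5, 0.0, 0.3, 1.0):
    assert abs(poisson_mgf(t, lam1) * poisson_mgf(t, lam2)
               - poisson_mgf(t, lam1 + lam2)) < 1e-9
```

Of course, a finite numerical check is no substitute for the proof — it only confirms that the algebra above was carried out correctly for these particular parameters.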
4 Comments
Anders Berthelsen
28/11/2016 03:53:12 pm
Not sure I get the last step of your proof. How can you know that fX - fY is always positive? If it changes sign, then e^(tx) positive does not necessarily imply that fX - fY = 0.
Arash Jamshidi
7/4/2019 03:35:35 pm
I have the same question as Anders Berthelsen. How can you know fX - fY is always positive? From my point of view another possibility is that some of the fX - fY terms are negative and some of them are positive and they add to 0. Please explain further.
Arash Jamshidi
7/4/2019 03:36:58 pm
i meant "how can you know fX - fY is always zero" *
Arash Jamshidi
7/4/2019 04:36:31 pm
nevermind, i got it. if all the coefficients of a polynomial-like expansion sum to zero for every t, each coefficient equals zero. for anyone wondering why, you can prove it by taking the derivative n times (the last coefficient will equal zero; now work backward and all coefficients are zero.)
Author: I work as a pricing actuary at a reinsurer in London.