MLE of a Uniform Distribution

28/2/2023

I noticed something surprising about the Maximum Likelihood Estimator (MLE) for a uniform distribution yesterday. Suppose we're given a sample $X' = \{x_1, x_2, \ldots, x_n\}$ from a uniform distribution with parameters $a, b$. Then the MLE estimators are $\hat{a} = \min(X')$ and $\hat{b} = \max(X')$. [1] All straightforward so far.

However, examining the estimators, we can also say with probability 1 that $a < \min(X')$ and, similarly, that $b > \max(X')$. Isn't it strange that the MLE estimate of $a$ is clearly greater than the true value, and the MLE estimate of $b$ clearly less? So what can we do instead?

(Since Gauss did a lot of the early work on MLE, here's a portrait of him as a young man.)

Source: https://commons.wikimedia.org/wiki/File:Bendixen_-_Carl_Friedrich_Gau%C3%9F,_1828.jpg

Clearly, $\min(X') \to a$ asymptotically as $n \to \infty$. But it's interesting that the MLE method is unwilling to 'extrapolate' and provide us with an estimate of $a$ which is less than the smallest value observed in the sample.
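Here's a minimal simulation sketch of that bias (my own illustration rather than anything from the references; the parameter values $a = 2$, $b = 5$, $n = 10$ and the seed are arbitrary assumptions):

```python
# Sketch: the MLE min(X') systematically overshoots a, and max(X')
# undershoots b, because every observation lies strictly inside (a, b).
import numpy as np

rng = np.random.default_rng(seed=0)
a, b, n, trials = 2.0, 5.0, 10, 100_000  # arbitrary illustrative choices

samples = rng.uniform(a, b, size=(trials, n))
mle_a = samples.min(axis=1)  # MLE of a: sample minimum
mle_b = samples.max(axis=1)  # MLE of b: sample maximum

# For n uniform draws, E[min] = a + (b - a)/(n + 1) and
# E[max] = b - (b - a)/(n + 1), so here we expect roughly 2.27 and 4.73.
print(f"mean MLE of a: {mle_a.mean():.4f} (true a = {a})")
print(f"mean MLE of b: {mle_b.mean():.4f} (true b = {b})")
```

Every single trial lands $\hat{a}$ above $a$ and $\hat{b}$ below $b$; averaging over trials just makes the systematic bias visible.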
In order to do something along these lines, instead of the MLE we can use an unbiased estimator. For a uniform distribution on $[0, b]$, the following is unbiased for $b$: [2]

$$\hat{b} = \frac{n+1}{n} \max(X')$$

Using this estimator, we 'project out' beyond the sample maximum by the scaling factor $\frac{n+1}{n}$, which feels very sensible to me. Intuitively I'm much more comfortable estimating the endpoint using this unbiased estimator rather than the MLE. Yet I'd internalised the idea that the MLE is the 'best' estimator to use for a given problem. It turns out this may not always be the case, particularly for small sample sizes, where the scaling factor can be quite material. A simulation comparing the two estimators follows the footnotes below.

[1] https://www.mathworks.com/help/stats/uniform-distribution-continuous.html

[2] https://math.stackexchange.com/questions/2246222/unbiased-estimator-of-a-uniform-distribution
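A minimal sketch of the comparison (again my own illustration, assuming a uniform on $[0, b]$ with arbitrary $b = 5$ and $n = 10$):

```python
# Sketch: comparing the MLE max(X') with the bias-corrected estimator
# (n + 1)/n * max(X') for a uniform distribution on [0, b].
import numpy as np

rng = np.random.default_rng(seed=1)
b, n, trials = 5.0, 10, 100_000  # arbitrary illustrative choices

samples = rng.uniform(0.0, b, size=(trials, n))
mle = samples.max(axis=1)        # E[max] = n/(n + 1) * b, biased low
unbiased = (n + 1) / n * mle     # scaling restores E[estimate] = b

# With b = 5 and n = 10 we expect roughly 4.55 for the MLE
# and 5.00 for the bias-corrected estimator.
print(f"mean MLE:            {mle.mean():.4f} (true b = {b})")
print(f"mean bias-corrected: {unbiased.mean():.4f} (true b = {b})")
```

At $n = 10$ the scaling factor is $\frac{11}{10} = 1.1$, a 10% correction, which shows just how material it can be for small samples.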