## 8.3. MaxEnt for deriving some probability distributions
Here we derive some standard probability distributions by maximizing the entropy subject to constraints that encode the information we have.
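Before turning to the analytic examples, it may help to see the same variational problem solved numerically. Below is a minimal sketch (not part of the original derivation; the grid and the constraint values \(\mu=0\), \(\sigma=1\) are illustrative assumptions) that discretizes \(p(x)\) on a grid and maximizes the entropy with `scipy.optimize.minimize`, using the variance constraint of Example 1; the result should approach the Gaussian derived there.

```python
# Minimal numerical MaxEnt sketch (illustrative values, not from the notes):
# maximize S = -sum_i p_i ln(p_i) dx on a grid, subject to normalization
# and a fixed variance, then compare to the analytic Gaussian of Example 1.
import numpy as np
from scipy.optimize import minimize

mu, sigma = 0.0, 1.0                    # assumed constraint values
x = np.linspace(-6, 6, 201)
dx = x[1] - x[0]

def neg_entropy(p):
    p = np.clip(p, 1e-300, None)        # guard the log at p = 0
    return np.sum(p * np.log(p)) * dx

constraints = [
    {'type': 'eq', 'fun': lambda p: np.sum(p) * dx - 1.0},              # normalization
    {'type': 'eq', 'fun': lambda p: np.sum((x - mu)**2 * p) * dx - sigma**2},  # variance
]

p0 = np.ones_like(x) / (x[-1] - x[0])   # uniform initial guess
res = minimize(neg_entropy, p0, constraints=constraints,
               bounds=[(0, None)] * len(x), method='SLSQP')

gaussian = np.exp(-(x - mu)**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
print(np.max(np.abs(res.x - gaussian)))  # should be small (grid/optimizer dependent)
```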
### Example 1: the Gaussian
First we take the constraint on the variance (the second central moment):

\[
\int_{-\infty}^{\infty} dx\, (x-\mu)^2\, p(x) = \sigma^2 .
\]

We also have the normalization constraint:

\[
\int_{-\infty}^{\infty} dx\, p(x) = 1 .
\]

So we maximize:

\[
Q[p] = -\int_{-\infty}^{\infty} dx\, p(x)\,\ln\frac{p(x)}{m(x)}
 - \lambda_0 \left( \int_{-\infty}^{\infty} dx\, p(x) - 1 \right)
 - \lambda_1 \left( \int_{-\infty}^{\infty} dx\, (x-\mu)^2\, p(x) - \sigma^2 \right) .
\]

We will assume a uniform \(m(x)\), so that \(\ln m(x)\) is a constant that can be absorbed into \(\lambda_0\).
Step 1: differentiate \(Q\) with respect to \(p(x)\). What do you get?

Step 2: set the functional derivative equal to 0. Show that the solution is:

\[
p(x) = {\cal N}\, e^{-\lambda_1 (x-\mu)^2} ,
\]
where \({\cal N}=e^{-1-\lambda_0}\).
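If you want to check Steps 1 and 2 symbolically, here is a small SymPy sketch (my own illustration, with the \(p\)-independent constants in \(Q\) dropped):

```python
# Symbolic check of Steps 1-2: differentiate the integrand of Q with
# respect to p and solve for the stationary point.
import sympy as sp

p, lam0, lam1, x, mu = sp.symbols('p lambda_0 lambda_1 x mu')
integrand = -p * sp.log(p) - lam0 * p - lam1 * (x - mu)**2 * p
dQ_dp = sp.diff(integrand, p)
print(dQ_dp)                          # -log(p) - 1 - lambda_0 - lambda_1*(x - mu)**2
print(sp.solve(sp.Eq(dQ_dp, 0), p))   # [exp(-lambda_0 - lambda_1*(x - mu)**2 - 1)]
```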
Step 3a: Now, we impose the constraints. First, use the fact that \(\int_{-\infty}^{\infty} \exp(-y^2) \, dy=\sqrt{\pi}\) to fix \({\cal N}\) (and \(\lambda_0\)).
Step 3b: Second, compute \(\int_{-\infty}^{\infty} y^2 \exp(-y^2) \, dy\), and use the result to show that \(\lambda_1 = \frac{1}{2 \sigma^2}\). Putting the pieces together, you should find the Gaussian

\[
p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(x-\mu)^2/2\sigma^2} .
\]
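The integrals in Steps 3a and 3b, and the final constraint check, can be verified with SymPy; this is a sketch, not part of the original exercise:

```python
# Check Steps 3a-3b: the two Gaussian integrals, and that the final p(x)
# with N = 1/sqrt(2 pi sigma^2) and lambda_1 = 1/(2 sigma^2) satisfies
# both constraints.
import sympy as sp

y, x, mu = sp.symbols('y x mu', real=True)
sigma = sp.symbols('sigma', positive=True)

print(sp.integrate(sp.exp(-y**2), (y, -sp.oo, sp.oo)))          # sqrt(pi)
print(sp.integrate(y**2 * sp.exp(-y**2), (y, -sp.oo, sp.oo)))   # sqrt(pi)/2

p = sp.exp(-(x - mu)**2 / (2 * sigma**2)) / sp.sqrt(2 * sp.pi * sigma**2)
print(sp.simplify(sp.integrate(p, (x, -sp.oo, sp.oo))))                # 1
print(sp.simplify(sp.integrate((x - mu)**2 * p, (x, -sp.oo, sp.oo))))  # sigma**2
```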
### Example 2: the Poisson distribution
Now we will take a constraint on the mean (the first moment) of a quantity \(x \geq 0\):

\[
\int_{0}^{\infty} dx\, x\, p(x) = \mu .
\]

As usual, we also have the normalization constraint:

\[
\int_{0}^{\infty} dx\, p(x) = 1 .
\]

So we maximize:

\[
Q[p] = -\int_{0}^{\infty} dx\, p(x)\,\ln\frac{p(x)}{m(x)}
 - \lambda_0 \left( \int_{0}^{\infty} dx\, p(x) - 1 \right)
 - \lambda_1 \left( \int_{0}^{\infty} dx\, x\, p(x) - \mu \right) .
\]

We will again assume a uniform \(m(x)\).
Go through the steps as you did in the first example.
You should obtain the Poisson distribution (i.e., the exponential distribution of waiting times between Poisson-distributed events):

\[
p(x) = \frac{1}{\mu}\, e^{-x/\mu} .
\]
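As a sanity check (not part of the exercise), SymPy confirms that this \(p(x)\) satisfies both constraints:

```python
# Verify normalization and the mean constraint for p(x) = exp(-x/mu)/mu.
import sympy as sp

x = sp.symbols('x', nonnegative=True)
mu = sp.symbols('mu', positive=True)
p = sp.exp(-x / mu) / mu
print(sp.integrate(p, (x, 0, sp.oo)))      # 1
print(sp.integrate(x * p, (x, 0, sp.oo)))  # mu
```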
### Example 3: the log-normal distribution
Suppose the constraint is on the variance of \(\ln x\) (with \(x > 0\)), i.e.,

\[
\int_{0}^{\infty} dx\, \left[\ln(x/x_0)\right]^2 p(x) = \sigma^2 .
\]

Change variables to \(y=\ln(x/x_0)\). What is the constraint in terms of \(y\)?
Now maximize the entropy, subject to this constraint and, of course, the normalization constraint.

You should obtain a Gaussian in \(y\); transforming back to \(x\) (which brings in a Jacobian factor \(|dy/dx| = 1/x\)) gives the log-normal distribution:

\[
p(x) = \frac{1}{x\,\sigma\sqrt{2\pi}}\, \exp\left( -\frac{\left[\ln(x/x_0)\right]^2}{2\sigma^2} \right) .
\]
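A quick numerical check (with assumed illustrative values \(x_0 = 1\), \(\sigma = 0.5\)) that this \(p(x)\) is normalized and reproduces the constraint on the variance of \(\ln x\):

```python
# Check normalization and the variance-of-ln(x) constraint for the
# log-normal result, using assumed values x0 = 1.0, sigma = 0.5.
import numpy as np
from scipy.integrate import quad

x0, sigma = 1.0, 0.5

def p(x):
    return (np.exp(-np.log(x / x0)**2 / (2 * sigma**2))
            / (x * sigma * np.sqrt(2 * np.pi)))

norm, _ = quad(p, 0, np.inf)
var_ln, _ = quad(lambda t: np.log(t / x0)**2 * p(t), 0, np.inf)
print(norm)    # ~1.0
print(var_ln)  # ~0.25 = sigma**2
```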
When do you think it would make sense to say that we know the variance of \(\ln x\), rather than the variance of \(x\) itself?
### Example 4: the \(\ell_1\)-norm
Finally, we take the constraint on the mean absolute value of \(x-\mu\): \(\langle |x-\mu| \rangle=\epsilon\).
This constraint is written as:

\[
\int_{-\infty}^{\infty} dx\, |x-\mu|\, p(x) = \epsilon .
\]
Use the uniform measure, and go through the steps once again, to show that:

\[
p(x) = \frac{1}{2\epsilon}\, e^{-|x-\mu|/\epsilon} ,
\]

which is the Laplace distribution.
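As a final sanity check (with assumed values \(\mu = 0\), \(\epsilon = 1.3\)), one can verify normalization and \(\langle |x-\mu| \rangle = \epsilon\) numerically:

```python
# Check normalization and the mean-absolute-deviation constraint for
# the Laplace result, using assumed values mu = 0.0, eps = 1.3.
import numpy as np
from scipy.integrate import quad

mu, eps = 0.0, 1.3

def p(x):
    return np.exp(-np.abs(x - mu) / eps) / (2 * eps)

norm, _ = quad(p, -np.inf, np.inf)
mad, _ = quad(lambda t: np.abs(t - mu) * p(t), -np.inf, np.inf)
print(norm)  # ~1.0
print(mad)   # ~1.3 = epsilon
```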