Jeffreys prior

Bayesians soon realized that the use of naive rules to obtain objective priors (e.g. a constant prior) leads to inconsistencies as a consequence of the different expressions (parameterizations) that a model can take. This observation led Harold Jeffreys to propose, in [1], a rule for deriving objective priors that is consistent under changes of parameterization (i.e. invariant under one-to-one reparameterizations) and that we nowadays call the Jeffreys prior.

For a parameter $\theta$ it takes the form:

$\pi(\theta) \propto \sqrt{\det I(\theta)}$,

where $I(\theta)$ is the Fisher information matrix.
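As an illustration (a minimal sketch, not taken from the article), the following Python code uses sympy to carry out this computation for a Bernoulli model with success probability p; the resulting prior is the kernel of a Beta(1/2, 1/2) distribution.

```python
# Minimal illustrative sketch: Jeffreys prior for the success probability p
# of a single Bernoulli observation, derived symbolically with sympy.
import sympy as sp

p, x = sp.symbols('p x', positive=True)

# Log-likelihood of one Bernoulli observation x in {0, 1}
log_lik = x * sp.log(p) + (1 - x) * sp.log(1 - p)

# Fisher information I(p) = -E[d^2 log f(x|p) / dp^2], using E[x] = p
d2 = sp.diff(log_lik, p, 2)
fisher = sp.simplify(-d2.subs(x, p))   # equals 1/(p*(1 - p))

# Jeffreys prior: pi(p) proportional to sqrt(I(p))
jeffreys = sp.sqrt(fisher)             # 1/sqrt(p*(1 - p)), a Beta(1/2, 1/2) kernel
print(jeffreys)
```

In this example the Jeffreys prior is proper (a Beta(1/2, 1/2) distribution), although Jeffreys priors are often improper.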

For univariate parameters, and under general conditions, the Jeffreys prior and the reference prior coincide.

Independence Jeffreys prior

In the case of multivariate parameters, the Jeffreys prior may lead to unsatisfactory results, as Jeffreys himself noticed. One alternative route that may provide better priors is the so-called independence Jeffreys prior, whose use was advocated by Jeffreys in the normal i.i.d. problem with both parameters unknown.

This second Jeffreys rule applies in the situation where $\theta = (\mu, \sigma^2)$, with $\mu$ a mean parameter and $\sigma^2$ a variance parameter. In this case, the rule assumes that both parameters are independent a priori, and the prior is obtained as the product of i) the result of applying the rule to $\mu$ as if $\sigma^2$ were known and ii) the result of applying the rule to $\sigma^2$ as if $\mu$ were known.
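The sketch below (symbol names and the sympy-based setup are mine, not from the article) works this out for a single observation from $N(\mu, \sigma^2)$: step i) gives a prior for $\mu$ that is flat, since the Fisher information $1/\sigma^2$ does not depend on $\mu$, and step ii) gives a prior proportional to $1/\sigma$, so the independence Jeffreys prior is $\pi(\mu, \sigma) \propto 1/\sigma$.

```python
# Minimal sketch: independence Jeffreys prior for the normal model N(mu, sigma^2)
# with both parameters unknown, derived symbolically with sympy.
import sympy as sp

mu, x, s = sp.symbols('mu x s', real=True)
sigma = sp.symbols('sigma', positive=True)

# Log-likelihood of a single observation x ~ N(mu, sigma^2)
log_lik = -sp.log(sigma) - sp.Rational(1, 2) * sp.log(2 * sp.pi) \
          - (x - mu)**2 / (2 * sigma**2)

def fisher(param):
    """Fisher information for one parameter, the other held fixed."""
    d2 = sp.diff(log_lik, param, 2)
    # Take expectations by writing x = mu + s, with E[s] = 0 and E[s^2] = sigma^2
    d2 = sp.expand(d2.subs(x, mu + s)).subs(s**2, sigma**2).subs(s, 0)
    return sp.simplify(-d2)

# i) Rule applied to mu with sigma treated as known: I(mu) = 1/sigma^2 is
#    constant in mu, so the prior for mu is flat (constants in sigma dropped).
info_mu = fisher(mu)                   # equals 1/sigma**2, free of mu
prior_mu = sp.Integer(1)               # flat prior in mu

# ii) Rule applied to sigma with mu treated as known: proportional to 1/sigma
prior_sigma = sp.sqrt(fisher(sigma))   # equals sqrt(2)/sigma

# Independence Jeffreys prior: the product, proportional to 1/sigma
print(sp.simplify(prior_mu * prior_sigma))
```

For comparison, the multivariate Jeffreys rule applied jointly to $(\mu, \sigma)$ gives $\sqrt{\det I(\mu, \sigma)} \propto 1/\sigma^2$, the prior whose behaviour in this problem Jeffreys found unsatisfactory.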

References

  1. Jeffreys, H. (1961). Theory of Probability (3rd edition). Oxford University Press, London.