# Jeffreys prior

Bayesians soon realized that the use of naive rules to obtain objective priors (e.g. a constant prior) leads to inconsistencies, because the same model can be written under different expressions (parameterizations). This observation led Harold Jeffreys to propose, in 1946, a rule for deriving objective priors that is consistent across changes of parameterization (i.e. invariant under one-to-one reparameterizations), nowadays called the Jeffreys prior.

For a parameter ${\boldsymbol {\theta }}=(\theta _{1},\theta _{2},\ldots ,\theta _{k})$ it takes the form:

$\pi(\boldsymbol{\theta}) \propto |\mathbf{I}(\boldsymbol{\theta})|^{1/2},$

where $\mathbf{I}(\boldsymbol{\theta})$ is the Fisher information matrix.
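As a concrete illustration (not part of the original text), consider the well-known Bernoulli case: the Fisher information is $I(\theta) = 1/(\theta(1-\theta))$, so the Jeffreys prior is $\pi(\theta) \propto \theta^{-1/2}(1-\theta)^{-1/2}$, the kernel of a Beta(1/2, 1/2) density. A minimal sketch checking this numerically:

```python
import numpy as np

# Illustrative example: Jeffreys prior for a Bernoulli(theta) likelihood.
# The Fisher information is I(theta) = 1 / (theta * (1 - theta)),
# so pi(theta) ∝ sqrt(I(theta)) = theta^{-1/2} (1 - theta)^{-1/2},
# i.e. the (unnormalised) Beta(1/2, 1/2) density.

def fisher_information_bernoulli(theta):
    return 1.0 / (theta * (1.0 - theta))

def jeffreys_prior_unnormalised(theta):
    # The Jeffreys rule: pi(theta) ∝ |I(theta)|^{1/2}
    return np.sqrt(fisher_information_bernoulli(theta))

# Evaluate on a grid strictly inside (0, 1) to avoid the endpoints.
theta = np.linspace(0.01, 0.99, 99)
prior = jeffreys_prior_unnormalised(theta)

# Compare against the Beta(1/2, 1/2) kernel: identical up to a constant.
beta_kernel = theta**-0.5 * (1.0 - theta)**-0.5
assert np.allclose(prior, beta_kernel)
```

The assertion passes because the two expressions are algebraically identical; in practice one normalises by the Beta(1/2, 1/2) constant $1/\pi$ to obtain a proper density.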

For univariate parameters, under general conditions, the Jeffreys prior and the reference prior coincide.

## Independence Jeffreys prior

In the case of multivariate parameters, the Jeffreys prior may lead to unsatisfactory results, as Jeffreys himself noticed. An alternative route that may provide better priors is the so-called independence Jeffreys prior, whose use Jeffreys advocated for the normal i.i.d. problem with both parameters unknown.

This second Jeffreys rule applies when ${\boldsymbol{\theta}}=({\boldsymbol{\theta}}_{1},{\boldsymbol{\theta}}_{2})$, where ${\boldsymbol{\theta}}_{1}$ is a mean parameter and ${\boldsymbol{\theta}}_{2}$ is a variance parameter. The rule assumes that the two parameters are a priori independent, and $\pi(\boldsymbol{\theta})$ is obtained as the product of (i) the prior obtained by applying the rule to ${\boldsymbol{\theta}}_{1}$ as if ${\boldsymbol{\theta}}_{2}$ were known, and (ii) the prior obtained by applying the rule to ${\boldsymbol{\theta}}_{2}$ as if ${\boldsymbol{\theta}}_{1}$ were known.
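The rule can be illustrated with the standard normal example (a worked sketch, not part of the original text). For i.i.d. observations from $N(\mu, \sigma^2)$, the per-observation Fisher information matrix in the $(\mu, \sigma)$ parameterization is

$$\mathbf{I}(\mu,\sigma) = \begin{pmatrix} 1/\sigma^2 & 0 \\ 0 & 2/\sigma^2 \end{pmatrix},$$

so the full Jeffreys prior is

$$\pi(\mu,\sigma) \propto |\mathbf{I}(\mu,\sigma)|^{1/2} = \frac{\sqrt{2}}{\sigma^2} \propto \frac{1}{\sigma^2}.$$

Under the independence rule instead: (i) with $\sigma$ known, $I(\mu) = 1/\sigma^2$ does not depend on $\mu$, giving $\pi(\mu) \propto 1$; (ii) with $\mu$ known, $I(\sigma) = 2/\sigma^2$, giving $\pi(\sigma) \propto 1/\sigma$. The product is

$$\pi(\mu,\sigma) \propto \frac{1}{\sigma},$$

which differs from the full Jeffreys prior $1/\sigma^2$ and is the prior Jeffreys recommended for this problem.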