Walker, Stephen G. (2006). Bayesian inference via a minimization rule. Sankhya: The Indian Journal of Statistics, 68(4), pp. 542-553. ISSN 0972-7671. (KAR id:10592)
The full text of this publication is not currently available from this repository.
Official URL: http://sankhya.isical.ac.in/pdfs/68_4/68_4cont.htm...
Abstract
In this paper, we consider the Bayesian posterior distribution as the solution
to a minimization rule, first observed by Zellner (1988). The expression to be
minimized is a mixture of two pieces: one involves the prior distribution and
is minimized by the prior itself; the other involves the data and is minimized
by the measure putting all its mass on the maximum likelihood estimator. From
this perspective on the posterior distribution, Bayesian model selection and
the search for an objective prior distribution can be viewed in a way that
differs from the usual Bayesian approaches.
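For reference, the minimization rule described in the abstract is usually written as follows. This is a sketch based on the abstract and on Zellner (1988), not an excerpt from the paper itself (whose full text is unavailable here); the notation $\pi(\theta)$ for the prior, $f(x \mid \theta)$ for the likelihood, and $q$ for a candidate probability measure over $\theta$ is assumed.

```latex
% Sketch of the minimization rule (after Zellner, 1988); notation assumed, not quoted.
% The objective splits into the two pieces described in the abstract:
%   - the prior piece, KL(q || pi), minimized by taking q equal to the prior pi;
%   - the data piece, an expected negative log-likelihood, minimized by a
%     point mass at the maximum likelihood estimator.
\[
  F(q)
  \;=\;
  \underbrace{\int q(\theta)\,\log\frac{q(\theta)}{\pi(\theta)}\,d\theta}_{\text{prior piece: }\mathrm{KL}(q \,\|\, \pi)}
  \;-\;
  \underbrace{\int q(\theta)\,\log f(x \mid \theta)\,d\theta}_{\text{data piece}} .
\]
% Combining the two integrals and normalizing by the marginal likelihood
% m(x) = \int \pi(\theta) f(x | \theta) d\theta gives
\[
  F(q)
  \;=\;
  \int q(\theta)\,\log\frac{q(\theta)}{\pi(\theta)\,f(x \mid \theta)}\,d\theta
  \;=\;
  \mathrm{KL}\!\left(q \;\Big\|\; \frac{\pi(\cdot)\,f(x \mid \cdot)}{m(x)}\right)
  \;-\; \log m(x) ,
\]
% so the unique minimizer is the Bayesian posterior
% q^*(\theta) = \pi(\theta) f(x | \theta) / m(x), with minimum value -log m(x).
```

Since the attained minimum is $-\log m(x)$, the minimized objective is a function of the marginal likelihood, which is plausibly how this viewpoint connects to the Bayesian model selection discussed in the abstract.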
- Item Type: Article
- Uncontrolled keywords: Kullback-Leibler divergence, maximum likelihood estimator, minimization, model selection, objective prior, prior, posterior.
- Subjects: Q Science > QA Mathematics (inc Computing science)
- Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Mathematics, Statistics and Actuarial Science
- Depositing User: Judith Broom
- Date Deposited: 11 Jul 2008 14:08 UTC
- Last Modified: 16 Nov 2021 09:49 UTC
- Resource URI: https://kar.kent.ac.uk/id/eprint/10592