Walker, S.G. (2006) Bayesian inference via a minimization rule. Sankhya: The Indian Journal of Statistics, 68 (4). pp. 542-553. ISSN 0972-7671.
In this paper, we consider the Bayesian posterior distribution as the solution to a minimization rule, an observation first made by Zellner (1988). The expression to be minimized is a mixture of two pieces: one involves the prior distribution and is minimized by the prior itself, while the other involves the data and is minimized by the measure putting all its mass on the maximum likelihood estimator. From this perspective on the posterior distribution, Bayesian model selection and the search for an objective prior distribution can be viewed in a way that differs from the usual Bayesian approaches.
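Zellner's observation described in the abstract can be illustrated numerically. The sketch below (not taken from the paper; the grid, prior, and toy data are assumptions for illustration) discretizes the functional F(q) = KL(q ‖ prior) − E_q[log likelihood] on a parameter grid and checks that the Bayes posterior attains a lower value of F than either extreme: the prior alone (which minimizes the Kullback-Leibler piece) or a narrow spike near the maximum likelihood estimator (which minimizes the data piece).

```python
import numpy as np

# Toy setup (assumed, for illustration): N(0,1) prior, N(theta,1) likelihood.
theta = np.linspace(-3, 3, 601)          # parameter grid
d_theta = theta[1] - theta[0]

prior = np.exp(-0.5 * theta**2)          # N(0,1) prior density, renormalised on the grid
prior /= prior.sum() * d_theta

x = np.array([0.8, 1.2, 1.0])            # toy data
loglik = np.array([-0.5 * np.sum((x - t)**2) for t in theta])

def F(q):
    """Zellner's functional: KL(q || prior) minus expected log-likelihood under q."""
    mask = q > 0
    kl = np.sum(q[mask] * np.log(q[mask] / prior[mask])) * d_theta
    return kl - np.sum(q * loglik) * d_theta

# Bayes posterior: prior times likelihood, renormalised.
post = prior * np.exp(loglik - loglik.max())
post /= post.sum() * d_theta

# A narrow spike near the MLE (here the sample mean), minimising the data piece alone.
spike = np.exp(-0.5 * ((theta - x.mean()) / 0.05) ** 2)
spike /= spike.sum() * d_theta

# The posterior beats both extremes on the combined criterion.
print(F(post) < F(prior), F(post) < F(spike))  # → True True
```

On the discretized problem the minimizer of F over densities is exactly proportional to prior × likelihood (set the derivative of the Lagrangian to zero), so the posterior achieves a strictly smaller F than any other grid density.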
|Uncontrolled keywords:||Kullback-Leibler divergence, maximum likelihood estimator, minimization, model selection, objective prior, prior, posterior.|
|Subjects:||Q Science > QA Mathematics (inc Computing science)|
|Divisions:||Faculties > Science Technology and Medical Studies > School of Mathematics Statistics and Actuarial Science > Statistics|
|Depositing User:||Judith Broom|
|Date Deposited:||11 Jul 2008 14:08|
|Last Modified:||14 Jan 2010 14:41|
|Resource URI:||http://kar.kent.ac.uk/id/eprint/10592 (The current URI for this page, for reference purposes)|