G. Cowan, Lectures on Statistical Data Analysis
Statistical Data Analysis: Lecture 14
1 Probability, Bayes’ theorem
2 Random variables and probability densities
3 Expectation values, error propagation
4 Catalogue of pdfs
5 The Monte Carlo method
6 Statistical tests: general concepts
7 Test statistics, multivariate methods
8 Goodness-of-fit tests
9 Parameter estimation, maximum likelihood
10 More maximum likelihood
11 Method of least squares
12 Interval estimation, setting limits
13 Nuisance parameters, systematic uncertainties
14 Examples of Bayesian approach



A typical fitting problem

Given measurements $y_i$, $i = 1, \ldots, N$, and (usually) covariances $V_{ij} = \mathrm{cov}[y_i, y_j]$.

Predicted value (expectation value): $E[y_i] = \lambda(x_i; \boldsymbol{\theta}) + b_i$, with control variable $x_i$, parameters $\boldsymbol{\theta}$, and bias $b_i$.

Often take uncorrelated errors, $V_{ij} = \sigma_i^2 \delta_{ij}$, and minimize

$$\chi^2(\boldsymbol{\theta}) = \sum_{i=1}^{N} \frac{\left(y_i - \lambda(x_i; \boldsymbol{\theta})\right)^2}{\sigma_i^2}.$$

This is equivalent to maximizing $L(\boldsymbol{\theta}) \sim e^{-\chi^2/2}$, i.e., least squares is the same as maximum likelihood using a Gaussian likelihood function.
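As a concrete illustration of this setup, here is a minimal least-squares fit in Python; the straight-line model, the data values, and the errors are illustrative assumptions, not from the lecture:

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal least-squares fit (illustrative assumptions: straight-line model,
# made-up data and errors).  curve_fit minimizes chi^2 with V_ij = sigma_i^2 * delta_ij,
# equivalent to maximizing the Gaussian likelihood L(theta) ~ exp(-chi^2 / 2).

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)            # control variable x_i
sigma = np.full_like(x, 0.1)             # uncorrelated errors sigma_i
theta_true = (0.5, 1.5)                  # assumed 'true' parameters

def lam(x, a, b):
    """Predicted value lambda(x; theta) for the assumed straight-line model."""
    return a + b * x

y = lam(x, *theta_true) + rng.normal(0.0, sigma)   # toy measurements y_i

theta_hat, cov = curve_fit(lam, x, y, sigma=sigma, absolute_sigma=True)
chi2 = np.sum(((y - lam(x, *theta_hat)) / sigma) ** 2)
print(f"theta_hat = {theta_hat}, errors = {np.sqrt(np.diag(cov))}, "
      f"chi2/ndf = {chi2:.2f}/{len(x) - len(theta_hat)}")
```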

The error on the error

Some systematic errors are well determined, e.g., the error from a finite Monte Carlo sample.
Some are less obvious: do the analysis in n 'equally valid' ways and extract the systematic error from the 'spread' in the results.
Some are educated guesses: guess the possible size of missing terms in a perturbation series; vary the renormalization scale.
Can we incorporate the 'error on the error'? (cf. G. D'Agostini 1999; Dose & von der Linden 1999)

Motivating a non-Gaussian prior $\pi_b(b)$

Suppose now the experiment is characterized by an (unreported) factor $s_i$ by which the systematic error is over/under-estimated. Assume the correct error for a Gaussian $\pi_b(b)$ would be $s_i \sigma_i^{\mathrm{sys}}$, so

$$\pi_b(b_i) = \int \mathrm{Gauss}(b_i;\, 0,\, s_i \sigma_i^{\mathrm{sys}})\, \pi_s(s_i)\, ds_i .$$

The width of $\pi_s(s_i)$ reflects the 'error on the error'. The prior for the bias $\pi_b(b)$ now has longer tails than a Gaussian:

Gaussian ($\sigma_s = 0$): $P(|b| > 4\sigma_{\mathrm{sys}}) = 6.3 \times 10^{-5}$;
$\sigma_s = 0.5$: $P(|b| > 4\sigma_{\mathrm{sys}})$ is substantially larger.
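A short Monte Carlo sketch of this effect, assuming (as one possible model for the error-on-error) a gamma distribution for the scale factor $s$ with mean 1 and standard deviation $\sigma_s$; all numerical settings here are illustrative:

```python
import numpy as np

# Monte Carlo check of the longer tails (assumption: the scale factor s is
# gamma-distributed with mean 1 and standard deviation sigma_s, and
# b | s ~ Gauss(0, (s * sigma_sys)^2); marginalizing over s gives pi_b).

rng = np.random.default_rng(42)
n = 2_000_000
sigma_sys = 1.0

def tail_prob(sigma_s):
    """Estimate P(|b| > 4 sigma_sys) under the marginalized bias prior."""
    if sigma_s == 0.0:
        s = np.ones(n)                               # Gaussian case: s fixed at 1
    else:
        k = 1.0 / sigma_s**2                         # gamma shape for mean 1, std sigma_s
        s = rng.gamma(shape=k, scale=sigma_s**2, size=n)
    b = rng.normal(0.0, s * sigma_sys)               # one bias draw per sampled s
    return np.mean(np.abs(b) > 4.0 * sigma_sys)

print(f"sigma_s = 0.0: P ~ {tail_prob(0.0):.1e}")    # ~6.3e-5 (pure Gaussian)
print(f"sigma_s = 0.5: P ~ {tail_prob(0.5):.1e}")    # substantially larger
```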

A simple test

Suppose the fit effectively averages four measurements. Take $\sigma_{\mathrm{sys}} = \sigma_{\mathrm{stat}} = 0.1$, uncorrelated.

Case #1: data appear compatible. Posterior $p(\mu|y)$:

[Figure: the four measurements and the posterior $p(\mu|y)$; axes: measurement vs. experiment.]

Usually summarize the posterior $p(\mu|y)$ with the mode and standard deviation.
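A numerical sketch of such a posterior, under the same assumed gamma model for the scale factor; the four data values and the flat prior on $\mu$ are illustrative assumptions:

```python
import numpy as np

# Numerical sketch of the 'simple test' (assumptions: made-up compatible data,
# sigma_stat = sigma_sys = 0.1, gamma-distributed scale factor s with mean 1
# and std sigma_s, flat prior on mu).  Posterior evaluated on a mu grid.

rng = np.random.default_rng(2)
sigma_stat = sigma_sys = 0.1
sigma_s = 0.5
y = np.array([1.00, 1.02, 0.98, 1.01])        # Case #1: compatible data (toy)

# For each MC-sampled s the residual r = y_i - mu is Gaussian with variance
# sigma_stat^2 + (s * sigma_sys)^2; averaging over s marginalizes the bias.
k = 1.0 / sigma_s**2                          # gamma shape for mean 1, std sigma_s
s = rng.gamma(k, sigma_s**2, size=20_000)
var = sigma_stat**2 + (s * sigma_sys)**2

def marg_like(r):
    return np.mean(np.exp(-0.5 * r**2 / var) / np.sqrt(2.0 * np.pi * var))

mu = np.linspace(0.7, 1.3, 601)
post = np.array([np.prod([marg_like(m - yi) for yi in y]) for m in mu])
post /= post.sum() * (mu[1] - mu[0])          # normalize on the grid

mode = mu[np.argmax(post)]
sd = np.sqrt(np.sum((mu - mode)**2 * post) * (mu[1] - mu[0]))
print(f"posterior mode ~ {mode:.3f}, standard deviation ~ {sd:.3f}")
```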

Goodness-of-fit vs. size of error

In a LS fit, the value of the minimized $\chi^2$ does not affect the size of the error on the fitted parameter. In a Bayesian analysis with a non-Gaussian prior for the systematics, a high $\chi^2$ corresponds to a larger error (and vice versa).

[Figure: 2000 repetitions of the experiment (here no actual bias); error on $\mu$ from least squares and from the posterior.]

Is this workable in practice?

Should generalize to include correlations: prior on the correlation coefficients, $\pi(\rho)$. (Myth: $\rho = 1$ is "conservative".)

Can separate out different systematics for the same measurement; some will have small $\sigma_s$, others larger.

Remember the "if-then" nature of a Bayesian result: we can (and should) vary the priors and see what effect this has on the conclusions.

Assessing Bayes factors

One can use the Bayes factor much like a p-value (or Z value). There is an "established" scale, analogous to HEP's 5σ rule:

    B_10         Evidence against H_0
    1 to 3       Not worth more than a bare mention
    3 to 20      Positive
    20 to 150    Strong
    > 150        Very strong

Kass and Raftery, Bayes Factors, J. Am. Stat. Assoc. 90 (1995) 773.

Rewriting the Bayes factor

Suppose we have models $H_i$, $i = 0, 1, \ldots$, each with a likelihood $L(x|\theta_i, H_i)$ and a prior pdf $\pi(\theta_i|H_i)$ for its internal parameters, so that the full prior is

$$\pi(\theta_i, H_i) = P(H_i)\,\pi(\theta_i|H_i),$$

where $P(H_i)$ is the overall prior probability for $H_i$.

The Bayes factor comparing $H_i$ and $H_j$ can be written

$$B_{ij} = \frac{\int L(x|\theta_i, H_i)\,\pi(\theta_i|H_i)\,d\theta_i}{\int L(x|\theta_j, H_j)\,\pi(\theta_j|H_j)\,d\theta_j}.$$

Numerical determination of Bayes factors

Both numerator and denominator of $B_{ij}$ are of the form of a 'marginal likelihood'

$$m = \int L(x|\theta)\,\pi(\theta)\,d\theta.$$

Various ways to compute these, e.g., using sampling of the posterior pdf (which we can do with MCMC):

Harmonic mean (and improvements)
Importance sampling
Parallel tempering (~thermodynamic integration)
...
See e.g.

Harmonic mean estimator

E.g., consider only one model and write Bayes' theorem as

$$\pi(\theta|x) = \frac{L(x|\theta)\,\pi(\theta)}{m}.$$

$\pi(\theta)$ is normalized to unity, so integrating both sides gives

$$\frac{1}{m} = \int \frac{\pi(\theta|x)}{L(x|\theta)}\,d\theta = E\!\left[\frac{1}{L}\right],$$

a posterior expectation. Therefore sample $\theta$ from the posterior via MCMC and estimate $m$ with one over the average of $1/L$ (the harmonic mean of $L$).
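A minimal sketch of the harmonic mean estimator on a toy conjugate-Gaussian model (the data, prior, and all settings are assumptions chosen so the exact marginal likelihood is available for comparison; exact posterior sampling stands in for MCMC):

```python
import numpy as np
from scipy import stats

# Harmonic mean estimator on an assumed conjugate model:
# x_i ~ Gauss(theta, sigma^2) with prior theta ~ Gauss(0, 1).  The posterior
# is Gaussian, so we sample it exactly (standing in for MCMC) and compare
# m_hat = 1 / mean(1/L) with the exact marginal likelihood.

rng = np.random.default_rng(0)
x = np.array([0.8, 1.2, 1.0, 0.9])   # toy data (assumed)
sigma = 1.0                          # known measurement error (assumed)
n = len(x)

# Exact posterior for the conjugate Gaussian model
v_post = 1.0 / (n / sigma**2 + 1.0)          # posterior variance
m_post = v_post * x.sum() / sigma**2         # posterior mean
theta = rng.normal(m_post, np.sqrt(v_post), size=100_000)

# Log-likelihood of the full dataset at each sampled theta
log_L = np.sum(stats.norm.logpdf(x[None, :], loc=theta[:, None], scale=sigma), axis=1)

# Harmonic mean: 1/m = E_posterior[1/L]; use log-sum-exp for stability
log_m_hat = -(np.logaddexp.reduce(-log_L) - np.log(len(theta)))

# Exact marginal likelihood: jointly x ~ Gauss(0, sigma^2 I + 1 1^T)
cov = sigma**2 * np.eye(n) + np.ones((n, n))
log_m_exact = stats.multivariate_normal.logpdf(x, mean=np.zeros(n), cov=cov)

print(f"harmonic mean log m ~ {log_m_hat:.3f}, exact log m = {log_m_exact:.3f}")
```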

Importance sampling

Need a pdf $f(\theta)$ which we can evaluate at arbitrary $\theta$ and also sample with MC. The marginal likelihood can be written

$$m = \int L(x|\theta)\,\pi(\theta)\,d\theta = \int \frac{L(x|\theta)\,\pi(\theta)}{f(\theta)}\,f(\theta)\,d\theta \approx \frac{1}{n}\sum_{i=1}^{n} \frac{L(x|\theta_i)\,\pi(\theta_i)}{f(\theta_i)}, \qquad \theta_i \sim f(\theta).$$

Best convergence when $f(\theta)$ approximates the shape of $L(x|\theta)\,\pi(\theta)$. Use for $f(\theta)$ e.g. a multivariate Gaussian with mean and covariance estimated from the posterior (e.g. with MINUIT).
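A matching importance-sampling sketch on the same assumed toy Gaussian model; here $f(\theta)$ is matched to the exactly known posterior, standing in for a mean and covariance estimated from a fit:

```python
import numpy as np
from scipy import stats

# Importance sampling for m = integral of L(x|theta) * pi(theta) d(theta), on the
# same assumed model as above (toy data, prior theta ~ Gauss(0, 1)).  The
# importance density f is a Gaussian matched to the (here exactly known)
# posterior; in practice its mean/width might come from a fit, e.g. MINUIT.

rng = np.random.default_rng(1)
x = np.array([0.8, 1.2, 1.0, 0.9])   # same toy data as above
sigma = 1.0
n = len(x)

v_post = 1.0 / (n / sigma**2 + 1.0)
m_post = v_post * x.sum() / sigma**2
f = stats.norm(m_post, np.sqrt(v_post))

theta = f.rvs(size=100_000, random_state=rng)
log_L = np.sum(stats.norm.logpdf(x[None, :], loc=theta[:, None], scale=sigma), axis=1)
log_w = log_L + stats.norm.logpdf(theta, 0.0, 1.0) - f.logpdf(theta)  # L * pi / f

log_m_hat = np.logaddexp.reduce(log_w) - np.log(len(theta))
print(f"importance sampling log m ~ {log_m_hat:.3f}")  # compare with exact value above
```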

Bayes factor computation discussion

Also tried method of parallel tempering; see note on course web page and also Harmonic mean OK for very rough estimate. I had trouble with all of the methods based on posterior sampling. Importance sampling worked best, but may not scale well to higher dimensions. Lots of discussion of this problem in the literature, e.g.,