Parameter Estimation - Computing and Statistical Data Analysis - Lecture 8 Slides (Physics)

ML estimators are not guaranteed to have any 'optimal' properties, but in practice they are very good. Lecture slides by Glen Cowan, Physics Department, University of London, United Kingdom.

G. Cowan, Lectures on Statistical Data Analysis
Statistical Data Analysis: Lecture 8
1 Probability, Bayes’ theorem, random variables, pdfs
2 Functions of r.v.s, expectation values, error propagation
3 Catalogue of pdfs
4 The Monte Carlo method
5 Statistical tests: general concepts
6 Test statistics, multivariate methods
7 Significance tests
8 Parameter estimation, maximum likelihood
9 More maximum likelihood
10 Method of least squares
11 Interval estimation, setting limits
12 Nuisance parameters, systematic uncertainties
13 Examples of Bayesian approach
14 tba

Pre-lecture comments on problem sheet 7

Problem sheet 7 involves modifying some C++ programs to create a Fisher discriminant and a neural network to separate two types of events (signal and background). Each event is characterized by three numbers: x, y and z. Each "event" (an instance of x, y, z) corresponds to a "row" in an n-tuple (here, a 3-tuple). In ROOT, n-tuples are stored in objects of the TTree class; a minimal sketch follows.
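As a rough illustration of the n-tuple idea (a minimal sketch assuming ROOT is available; the file name, tree name, and toy distributions are illustrative, not taken from the problem sheet):

```cpp
// Book a TTree with one branch per event variable and fill it with
// toy (x, y, z) rows; each Fill() call appends one "row" of the n-tuple.
#include "TFile.h"
#include "TTree.h"
#include "TRandom3.h"

int main() {
    TFile f("events.root", "RECREATE");       // output file (name illustrative)
    TTree tree("tree", "toy events");         // the n-tuple

    double x, y, z;
    tree.Branch("x", &x, "x/D"); // one branch ("column") per variable
    tree.Branch("y", &y, "y/D");
    tree.Branch("z", &z, "z/D");

    TRandom3 rnd(12345);
    for (int i = 0; i < 1000; ++i) {
        x = rnd.Gaus(0.0, 1.0);  // toy distributions, not the sheet's model
        y = rnd.Gaus(0.0, 1.0);
        z = rnd.Gaus(0.0, 1.0);
        tree.Fill();
    }
    tree.Write();
    f.Close();
    return 0;
}
```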

Properties of estimators

If we were to repeat the entire measurement many times, the estimates from each repetition would follow a pdf. (The slide shows three example sampling distributions, labelled "best", "large variance", and "biased".) We want a small (or zero) bias (systematic error): the average of repeated measurements should tend to the true value. And we want a small variance (statistical error). Small bias and small variance are in general conflicting criteria.
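In symbols, with $\hat{\theta}$ an estimator of a true parameter θ, the standard definitions are:

```latex
% bias and variance of an estimator \hat{\theta} of \theta
b \equiv E[\hat{\theta}\,] - \theta ,
\qquad
V[\hat{\theta}\,] \equiv E[\hat{\theta}^{2}] - \left( E[\hat{\theta}\,] \right)^{2} .
```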

An estimator for the mean (expectation value)

Parameter: $\mu = E[x]$. Estimator: $\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i$ (the "sample mean"). We find: $E[\hat{\mu}] = \mu$ (zero bias) and $V[\hat{\mu}] = \sigma^2 / n$.
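A minimal sketch of the "repeat the entire measurement" picture in standard C++ (no ROOT needed; the sample size, number of pseudo-experiments, and Gaussian parameters are illustrative):

```cpp
// Repeat a toy experiment many times and look at the sampling
// distribution of the sample mean: its average approaches mu (zero bias)
// and its variance approaches sigma^2 / n.
#include <cstdio>
#include <random>

int main() {
    const int nTrials = 10000, n = 25;
    const double mu = 5.0, sigma = 2.0;

    std::mt19937 rng(42);
    std::normal_distribution<double> gaus(mu, sigma);

    double sum = 0.0, sum2 = 0.0;
    for (int t = 0; t < nTrials; ++t) {
        double s = 0.0;
        for (int i = 0; i < n; ++i) s += gaus(rng);
        double muHat = s / n;        // sample mean for this pseudo-experiment
        sum  += muHat;
        sum2 += muHat * muHat;
    }
    double mean = sum / nTrials;                // ~ mu      (zero bias)
    double var  = sum2 / nTrials - mean * mean; // ~ sigma^2 / n
    std::printf("E[muHat] ~ %.3f (true %.1f), V[muHat] ~ %.4f (expect %.4f)\n",
                mean, mu, var, sigma * sigma / n);
    return 0;
}
```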

The likelihood function

Suppose the entire result of an experiment (a set of measurements) is a collection of numbers $\vec{x}$, and suppose the joint pdf for the data $\vec{x}$ is a function that depends on a set of parameters $\vec{\theta}$:

$f(\vec{x}; \vec{\theta})$

Now evaluate this function with the data obtained and regard it as a function of the parameter(s). This is the likelihood function:

$L(\vec{\theta}) = f(\vec{x}; \vec{\theta})$   ($\vec{x}$ treated as constant)

The likelihood function for i.i.d.* data

Consider $n$ independent observations of $x$: $x_1, \ldots, x_n$, where $x$ follows $f(x; \theta)$. The joint pdf for the whole data sample is:

$f(x_1, \ldots, x_n; \theta) = \prod_{i=1}^{n} f(x_i; \theta)$

In this case the likelihood function is

$L(\theta) = \prod_{i=1}^{n} f(x_i; \theta)$   (the $x_i$ treated as constant)

* i.i.d. = independent and identically distributed

ML example: parameter of exponential pdf

Consider the exponential pdf

$f(t; \tau) = \frac{1}{\tau} e^{-t/\tau}$,

and suppose we have i.i.d. data $t_1, \ldots, t_n$. The likelihood function is

$L(\tau) = \prod_{i=1}^{n} \frac{1}{\tau} e^{-t_i/\tau}$

The value of τ for which L(τ) is maximum also gives the maximum value of its logarithm (the log-likelihood function):

$\ln L(\tau) = \sum_{i=1}^{n} \ln f(t_i; \tau) = \sum_{i=1}^{n} \left( \ln \frac{1}{\tau} - \frac{t_i}{\tau} \right)$

ML example: parameter of exponential pdf (2)

Find its maximum by setting

$\frac{\partial \ln L(\tau)}{\partial \tau} = -\frac{n}{\tau} + \frac{1}{\tau^2} \sum_{i=1}^{n} t_i = 0 \quad \Rightarrow \quad \hat{\tau} = \frac{1}{n} \sum_{i=1}^{n} t_i$

Monte Carlo test: generate 50 values using τ = 1 and compute the ML estimate from the generated sample. (The slide shows the data histogram with the fitted exponential and the resulting value of $\hat{\tau}$.)

Example of ML: parameters of Gaussian pdf

Consider independent $x_1, \ldots, x_n$, with $x_i \sim$ Gaussian$(\mu, \sigma^2)$. The log-likelihood function is

$\ln L(\mu, \sigma^2) = \sum_{i=1}^{n} \ln f(x_i; \mu, \sigma^2) = \sum_{i=1}^{n} \left( -\frac{1}{2} \ln (2\pi\sigma^2) - \frac{(x_i - \mu)^2}{2\sigma^2} \right)$

Example of ML: parameters of Gaussian pdf (2)

Set the derivatives with respect to μ and σ² to zero and solve:

$\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad \widehat{\sigma^2} = \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat{\mu})^2$

We already know that the estimator for μ is unbiased. But we find

$E[\widehat{\sigma^2}] = \frac{n-1}{n} \sigma^2$,

so the ML estimator for σ² has a bias $b = -\sigma^2/n$, but b → 0 for n → ∞. Recall, however, that

$s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \hat{\mu})^2$

is an unbiased estimator for σ².
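The same estimators in a short standard-C++ sketch (sample size and true parameters are illustrative), contrasting the biased ML estimate with the bias-corrected s²:

```cpp
// Compute the ML estimates mu-hat and sigma2-hat for a Gaussian sample,
// plus the unbiased s^2 obtained by dividing by (n - 1) instead of n.
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int n = 100;
    const double mu = 2.0, sigma = 1.5;

    std::mt19937 rng(1);
    std::normal_distribution<double> gaus(mu, sigma);
    std::vector<double> x(n);
    for (double& xi : x) xi = gaus(rng);

    double muHat = 0.0;
    for (double xi : x) muHat += xi;
    muHat /= n;                       // ML estimator of mu (unbiased)

    double ss = 0.0;
    for (double xi : x) ss += (xi - muHat) * (xi - muHat);
    double sigma2ML = ss / n;         // ML estimator: biased by -sigma^2/n
    double s2       = ss / (n - 1);   // unbiased estimator

    std::printf("mu-hat = %.3f, sigma2 (ML) = %.3f, s^2 (unbiased) = %.3f\n",
                muHat, sigma2ML, s2);
    return 0;
}
```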

Variance of estimators from information inequality

The information inequality (RCF) sets a lower bound on the variance of any estimator (not only ML):

$V[\hat{\theta}\,] \geq \frac{\left( 1 + \frac{\partial b}{\partial \theta} \right)^2}{E\left[ -\frac{\partial^2 \ln L}{\partial \theta^2} \right]}$   (Minimum Variance Bound, MVB)

Often the bias b is small, and equality either holds exactly or is a good approximation (e.g. in the large data sample limit). Then

$V[\hat{\theta}\,] \approx \left( E\left[ -\frac{\partial^2 \ln L}{\partial \theta^2} \right] \right)^{-1}$

Estimate this using the 2nd derivative of ln L at its maximum:

$\widehat{V}[\hat{\theta}\,] = \left( -\frac{\partial^2 \ln L}{\partial \theta^2} \bigg|_{\hat{\theta}} \right)^{-1}$
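As a worked check, applying this prescription to the exponential example above (this follows directly from the log-likelihood; it is not shown on this slide):

```latex
% lnL(tau) = -n ln(tau) - (1/tau) sum_i t_i, evaluated at tau = tau-hat
\frac{\partial^2 \ln L}{\partial \tau^2}
  = \frac{n}{\tau^2} - \frac{2}{\tau^3} \sum_{i=1}^{n} t_i
  \;\Bigg|_{\tau = \hat{\tau}}
  = -\frac{n}{\hat{\tau}^2}
\quad\Rightarrow\quad
\widehat{V}[\hat{\tau}\,] = \frac{\hat{\tau}^2}{n},
\qquad
\hat{\sigma}_{\hat{\tau}} = \frac{\hat{\tau}}{\sqrt{n}} .
```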

Variance of estimators: graphical method

Expand ln L(θ) about its maximum:

$\ln L(\theta) = \ln L(\hat{\theta}) + \left[ \frac{\partial \ln L}{\partial \theta} \right]_{\theta = \hat{\theta}} (\theta - \hat{\theta}) + \frac{1}{2} \left[ \frac{\partial^2 \ln L}{\partial \theta^2} \right]_{\theta = \hat{\theta}} (\theta - \hat{\theta})^2 + \ldots$

The first term is $\ln L_{\max}$; the second term is zero; for the third term use the information inequality (assume equality):

$\ln L(\theta) \approx \ln L_{\max} - \frac{(\theta - \hat{\theta})^2}{2 \hat{\sigma}_{\hat{\theta}}^2}$

i.e.,

$\ln L(\hat{\theta} \pm \hat{\sigma}_{\hat{\theta}}) = \ln L_{\max} - \frac{1}{2}$

→ to get $\hat{\sigma}_{\hat{\theta}}$, change θ away from $\hat{\theta}$ until ln L decreases by 1/2.
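A minimal sketch of the graphical method in standard C++, again for the exponential example (step size and toy data are illustrative); it scans τ upward from $\hat{\tau}$ until ln L has dropped by 1/2 and compares the result with $\hat{\tau}/\sqrt{n}$:

```cpp
// Scan lnL(tau) away from its maximum until it decreases by 1/2 to read
// off the statistical error on tau-hat.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    // Toy data: 50 exponential values with true tau = 1.
    const int n = 50;
    std::mt19937 rng(7);
    std::exponential_distribution<double> expo(1.0);
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += expo(rng);
    double tauHat = sum / n;

    // lnL(tau) = -n ln(tau) - (1/tau) sum(t_i), up to an additive constant.
    auto lnL = [&](double tau) { return -n * std::log(tau) - sum / tau; };
    double lnLmax = lnL(tauHat);

    // Walk right from tau-hat until lnL has dropped by 1/2.
    double tau = tauHat, step = 1e-4;
    while (lnLmax - lnL(tau) < 0.5) tau += step;
    double sigmaUp = tau - tauHat;   // compare with tau-hat / sqrt(n)

    std::printf("tau-hat = %.3f, sigma (graphical) = %.3f, tau-hat/sqrt(n) = %.3f\n",
                tauHat, sigmaUp, tauHat / std::sqrt((double)n));
    return 0;
}
```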

Wrapping up lecture 8

We've seen some main ideas about parameter estimation: estimators, bias, and variance, and we introduced the likelihood function and ML estimators. We've also seen some ways to determine the variance (statistical error) of estimators:

the Monte Carlo method
using the information inequality
the graphical method

Next we will extend this to cover multiparameter problems, variable sample size, histogram-based data, ...