











Statistical Data Analysis: Parameter Estimation and Maximum Likelihood
Lecture slides by Glen Cowan, Physics Department, University of London, United Kingdom.

ML estimators are not guaranteed to have any 'optimal' properties, but in practice they are very good.
Course outline:
1. Probability, Bayes' theorem, random variables, pdfs
2. Functions of r.v.s, expectation values, error propagation
3. Catalogue of pdfs
4. The Monte Carlo method
5. Statistical tests: general concepts
6. Test statistics, multivariate methods
7. Significance tests
8. Parameter estimation, maximum likelihood
9. More maximum likelihood
10. Method of least squares
11. Interval estimation, setting limits
12. Nuisance parameters, systematic uncertainties
13. Examples of Bayesian approach
14. tba
Problem sheet 7 involves modifying some C++ programs to create a Fisher discriminant and a neural network to separate two types of events (signal and background). Each event is characterized by 3 numbers: x, y and z. Each "event" (instance of x, y, z) corresponds to a "row" in an n-tuple (here, a 3-tuple). In ROOT, n-tuples are stored in objects of the TTree class.
Properties of estimators: if we were to repeat the entire measurement many times, the estimates from each repetition would follow a pdf. [Figure: sampling distributions of an estimator, illustrating the 'best', 'large variance', and 'biased' cases.] We want small (or zero) bias (systematic error): the average of repeated measurements should tend to the true value. And we want a small variance (statistical error). Small bias and small variance are in general conflicting criteria.
Example: an estimator for the mean.

Parameter: $\mu = E[x]$

Estimator: $\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i$   (the 'sample mean')

We find: $E[\hat{\mu}] = \mu$ (zero bias) and $V[\hat{\mu}] = \sigma^2 / n$.
The likelihood function: suppose the entire result of an experiment (set of measurements) is a collection of numbers $\vec{x} = (x_1, \ldots, x_n)$, and suppose the joint pdf for the data depends on a set of parameters $\vec{\theta}$: $f(\vec{x}; \vec{\theta})$.

Now evaluate this function with the data obtained and regard it as a function of the parameter(s). This is the likelihood function:

$L(\vec{\theta}) = f(\vec{x}; \vec{\theta})$   ($\vec{x}$ constant)
Consider $n$ independent observations of $x$: $x_1, \ldots, x_n$, where $x$ follows the pdf $f(x; \vec{\theta})$. In this case the likelihood function is

$L(\vec{\theta}) = \prod_{i=1}^{n} f(x_i; \vec{\theta})$   ($x_i$ constant)
Example: ML estimator for an exponential pdf. Consider the exponential pdf

$f(x; \tau) = \frac{1}{\tau} e^{-x/\tau}$,

and suppose we have i.i.d. data $x_1, \ldots, x_n$. The likelihood function is

$L(\tau) = \prod_{i=1}^{n} \frac{1}{\tau} e^{-x_i/\tau}$.
The value of $\tau$ for which $L(\tau)$ is a maximum also gives the maximum value of its logarithm (the log-likelihood function):

$\ln L(\tau) = \sum_{i=1}^{n} \ln f(x_i; \tau) = \sum_{i=1}^{n} \left( \ln \frac{1}{\tau} - \frac{x_i}{\tau} \right)$

Find its maximum by setting

$\frac{\partial \ln L(\tau)}{\partial \tau} = 0$

We find the ML estimate:

$\hat{\tau} = \frac{1}{n} \sum_{i=1}^{n} x_i$

Monte Carlo test: generate 50 values from the exponential pdf and compute $\hat{\tau}$ from them.
Example: ML estimators for the parameters of a Gaussian. Consider independent $x_1, \ldots, x_n$, with $x_i \sim$ Gaussian$(\mu, \sigma^2)$. The log-likelihood function is

$\ln L(\mu, \sigma^2) = \sum_{i=1}^{n} \ln f(x_i; \mu, \sigma^2)$

Set the derivatives with respect to $\mu$ and $\sigma^2$ to zero and solve:

$\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} x_i, \qquad \widehat{\sigma^2} = \frac{1}{n} \sum_{i=1}^{n} (x_i - \hat{\mu})^2$

We find, however,

$E[\widehat{\sigma^2}] = \frac{n-1}{n} \sigma^2,$

so the ML estimator $\widehat{\sigma^2}$ has a bias, but $b \to 0$ for $n \to \infty$. Recall, however, that the sample variance

$s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2$

is an unbiased estimator for $\sigma^2$.
The information inequality (RCF) sets a lower bound on the variance of any estimator (not only ML):

$V[\hat{\theta}] \ge \frac{\left( 1 + \frac{\partial b}{\partial \theta} \right)^2}{E\left[ -\frac{\partial^2 \ln L}{\partial \theta^2} \right]}$   (Minimum Variance Bound, MVB)

Often the bias $b$ is small, and equality either holds exactly or is a good approximation (e.g. large data sample limit). Then

$V[\hat{\theta}] \approx \frac{1}{E\left[ -\partial^2 \ln L / \partial \theta^2 \right]}.$

Estimate this using the 2nd derivative of $\ln L$ at its maximum:

$\widehat{V[\hat{\theta}]} = \left( -\frac{\partial^2 \ln L}{\partial \theta^2} \bigg|_{\hat{\theta}} \right)^{-1}$
Variance of estimators: graphical method. Expand $\ln L(\theta)$ about its maximum:

$\ln L(\theta) = \ln L(\hat{\theta}) + \left[ \frac{\partial \ln L}{\partial \theta} \right]_{\theta = \hat{\theta}} (\theta - \hat{\theta}) + \frac{1}{2} \left[ \frac{\partial^2 \ln L}{\partial \theta^2} \right]_{\theta = \hat{\theta}} (\theta - \hat{\theta})^2 + \ldots$

The first term is $\ln L_{\max}$, the second term is zero, and for the third term use the information inequality (assume equality):

$\ln L(\theta) \approx \ln L_{\max} - \frac{(\theta - \hat{\theta})^2}{2 \hat{\sigma}_{\hat{\theta}}^2},$

i.e.,

$\ln L(\hat{\theta} \pm \hat{\sigma}_{\hat{\theta}}) \approx \ln L_{\max} - \frac{1}{2}.$
We’ve seen some main ideas about parameter estimation: estimators, bias, variance, and introduced the likelihood function and ML estimators. Also we’ve seen some ways to determine the variance (statistical error) of estimators: Monte Carlo method Using the information inequality Graphical Method Next we will extend this to cover multiparameter problems, variable sample size, histogram-based data, ...