Method of Maximum Likelihood - Stochastic Hydrology - Lecture Notes, Study notes of Mathematical Statistics

The main points in stochastic hydrology covered in these notes are listed below: Method of Maximum Likelihood, Likelihood Function, Estimates of Parameter, Chebyshev Inequality, Irrespective of Probability Distribution, Moments and Expectation, Covariance, Degree of Association.


Method of Maximum Likelihood

  • The likelihood function is constructed as

    L = f(x_1;\ \theta_1, \theta_2, \ldots, \theta_m) \times f(x_2;\ \theta_1, \theta_2, \ldots, \theta_m) \times \cdots \times f(x_n;\ \theta_1, \theta_2, \ldots, \theta_m) = \prod_{i=1}^{n} f(x_i;\ \theta_1, \ldots, \theta_m)

  • Maximize the likelihood function by setting

    \frac{\partial L}{\partial \theta_i} = 0, \qquad i = 1, 2, \ldots, m

  • Solving the m equations, the m parameters are estimated.
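As a quick numerical illustration of this procedure (not part of the original notes; the exponential pdf and the sample values below are assumptions for the sketch), the log-likelihood of f(x; λ) = λe^(−λx) can be maximized and compared with the closed-form estimate λ̂ = n/Σx_i obtained by setting its derivative to zero:

```python
import numpy as np

# Illustrative sample values (assumed, not from the notes)
x = np.array([0.8, 1.5, 0.3, 2.1, 0.9, 1.2])
n = len(x)

# Log-likelihood of the exponential pdf f(x; lam) = lam * exp(-lam * x):
#   ln L(lam) = n * ln(lam) - lam * sum(x_i)
def log_likelihood(lam):
    return n * np.log(lam) - lam * x.sum()

# Closed-form MLE from d(ln L)/d(lam) = n/lam - sum(x_i) = 0
lam_closed_form = n / x.sum()

# Numerical maximization over a grid of candidate values
grid = np.linspace(0.01, 5.0, 10_000)
lam_numeric = grid[np.argmax(log_likelihood(grid))]

print(lam_closed_form, lam_numeric)   # the two estimates agree closely
```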

Example-

Obtain the maximum likelihood estimate of the parameter β in the pdf

f(x;\ \beta) = \frac{2\beta^{3/2}}{\sqrt{\pi}}\, x^2\, e^{-\beta x^2}, \qquad x > 0

The likelihood function for a sample x_1, x_2, …, x_n is

L = \prod_{i=1}^{n} \frac{2\beta^{3/2}}{\sqrt{\pi}}\, x_i^2\, e^{-\beta x_i^2} = \left(\frac{2}{\sqrt{\pi}}\right)^{n} \beta^{3n/2} \left(\prod_{i=1}^{n} x_i^2\right) e^{-\beta \sum_{i=1}^{n} x_i^2}

Taking logarithms and setting the derivative with respect to β equal to zero,

\frac{\partial \ln L}{\partial \beta} = \frac{3n}{2\beta} - \sum_{i=1}^{n} x_i^2 = 0 \quad\Rightarrow\quad \hat{\beta} = \frac{3n}{2\sum_{i=1}^{n} x_i^2}

Chebyshev Inequality

  • The Chebyshev inequality states that a single observation selected at random from any probability distribution will deviate more than kσ from the mean μ with a probability less than or equal to 1/k². (It places an upper bound on the probability of deviation from the mean.)
  • The bound holds irrespective of the probability distribution.

P\left[\,\lvert X - \mu \rvert \ge k\sigma\,\right] \le \frac{1}{k^2}

Example-

The mean annual stream flow of a river is 135 Mm³ and the standard deviation is 23.8 Mm³. What is the maximum probability that the flow in a year will deviate by more than 45 Mm³ from the mean?

Applying the Chebyshev inequality,

kσ = 45
k × 23.8 = 45
k = 1.89

P\left[\,\lvert X - \mu \rvert \ge k\sigma\,\right] \le \frac{1}{k^2}

P\left[\,\lvert X - 135 \rvert \ge 45\,\right] = P\left[\,\lvert X - \mu \rvert \ge 1.89\,\sigma\,\right] \le \frac{1}{1.89^2} \approx 0.28
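The same arithmetic, as a small sketch:

```python
# Stream-flow example: sigma = 23.8 Mm^3, deviation of interest = 45 Mm^3
sigma = 23.8
deviation = 45.0

k = deviation / sigma      # k * sigma = 45  =>  k = 45 / 23.8, about 1.89
bound = 1.0 / k**2         # P(|X - mu| >= 45) <= 1/k^2, about 0.28

print(k, bound)
```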

Covariance

  • Covariance of X and Y, also denoted σ_{X,Y} or Cov(X, Y):

\sigma_{X,Y} = \mathrm{Cov}(X, Y) = E\left[(X - \mu_X)(Y - \mu_Y)\right] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - \mu_X)(y - \mu_Y)\, f(x, y)\, dx\, dy

  • σ_{X,Y} = Cov(X, Y) = 0 if X and Y are independent; the converse is not necessarily true.
  • The sample estimate of the population covariance is given by

s_{X,Y} = \frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})
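A small sketch of the sample covariance formula (the paired values are illustrative, not from the notes), checked against numpy's estimator:

```python
import numpy as np

# Illustrative paired observations (assumed data, not from the notes)
x = np.array([10.0, 12.0, 9.0, 14.0, 11.0])
y = np.array([4.0, 5.5, 3.8, 6.1, 4.9])

n = len(x)
# Sample covariance with the (n - 1) divisor, as in the slide
s_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)

print(s_xy, np.cov(x, y, ddof=1)[0, 1])   # the two values match
```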

Correlation

  • Correlation is a measure of the degree of association between two random variables X and Y.
  • Correlation is normalized covariance:

\rho_{X,Y} = \frac{\sigma_{X,Y}}{\sigma_X\, \sigma_Y}

  • ρ_{X,Y} = 0 if X and Y are independent; the converse is not necessarily true.
  • The sample estimate of the correlation coefficient is given by

r_{X,Y} = \frac{s_{X,Y}}{s_X\, s_Y}

  • −1 ≤ ρ_{X,Y} ≤ +1
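Correspondingly, the sample correlation coefficient can be computed from the sample covariance and standard deviations (again with illustrative data) and checked against numpy:

```python
import numpy as np

# Illustrative paired observations (assumed data, not from the notes)
x = np.array([10.0, 12.0, 9.0, 14.0, 11.0])
y = np.array([4.0, 5.5, 3.8, 6.1, 4.9])

n = len(x)
s_xy = np.sum((x - x.mean()) * (y - y.mean())) / (n - 1)   # sample covariance
s_x = x.std(ddof=1)                                        # sample standard deviations
s_y = y.std(ddof=1)

r = s_xy / (s_x * s_y)                                     # sample correlation coefficient
print(r, np.corrcoef(x, y)[0, 1])                          # matches numpy's estimate
```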

Correlation Coefficient

[Scatter plots of y against x illustrating different sample values of r]

  • Points are scattered; existence of some stochastic dependence.
  • Points are scattered; lack of strong stochastic linear dependence.
  • High degree of stochastic dependence; even though the dependence is non-linear, a high correlation coefficient can result.
  • Although X and Y are functionally related, the correlation coefficient can still be low when the relationship is not linear.

Correlation Coefficient

Consider Y = aX + b, a perfect linear relationship between X and Y.

\rho_{X,Y} = \frac{\sigma_{X,Y}}{\sigma_X\, \sigma_Y} = \frac{E[XY] - E[X]\,E[Y]}{\sigma_X\, \sigma_Y}

Substitute Y = aX + b:

\rho_{X,Y} = \frac{E[aX^2 + bX] - E[X]\,E[aX + b]}{\sigma_X\, \sigma_Y}
= \frac{a\,E[X^2] + b\,E[X] - a\{E[X]\}^2 - b\,E[X]}{\sigma_X\, \sigma_Y}
= \frac{a\left(E[X^2] - \{E[X]\}^2\right)}{\sigma_X\, \sigma_Y}
= \frac{a\,\sigma_X^2}{\sigma_X \cdot \lvert a \rvert\, \sigma_X} \qquad (\because\ \sigma_Y^2 = a^2\,\sigma_X^2)
= \frac{a}{\lvert a \rvert} = \pm 1

ρ = +1 when there is a perfect positive linear relationship between X and Y (a > 0), and ρ = −1 when a < 0. The correlation coefficient is therefore a measure of linear dependence.
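A quick numerical confirmation that a perfect linear relation gives ρ = ±1 with the sign of a (illustrative sketch, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)        # any X works; the result depends only on the linearity

for a, b in [(2.5, 1.0), (-0.7, 3.0)]:
    y = a * x + b                # perfect linear relation Y = aX + b
    r = np.corrcoef(x, y)[0, 1]
    print(a, round(r, 6))        # +1 when a > 0, -1 when a < 0
```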

Example-3 (contd.)

Mean,

\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n} = \frac{1627}{15} = 108.5\ \text{cm}

Variance,

s_x^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n - 1} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{15 - 1} = 250\ \text{cm}^2

Standard deviation, s_x = 15.811 cm

Example-3 (contd.)

Mean,

\bar{y} = \frac{\sum_{i=1}^{n} y_i}{n} = \frac{575}{15} = 38.33\ \text{cm}

Variance,

s_y^2 = \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{n - 1} = \frac{\sum_{i=1}^{n} (y_i - \bar{y})^2}{15 - 1}

Standard deviation, s_y = 10.841 cm

Example-3 (contd.)

Correlation coefficient,

s_{X,Y} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n - 1} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{15 - 1}

r_{X,Y} = \frac{s_{X,Y}}{s_X\, s_Y} = \frac{s_{X,Y}}{15.811 \times 10.841}
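With the 15 paired observations of Example-3 loaded into arrays (the data table itself is not reproduced in this excerpt, so placeholder values are used below), all of the above statistics follow directly:

```python
import numpy as np

# Placeholder values; replace with the 15 paired observations of Example-3
# (the data table is not reproduced in this excerpt).
x = np.array([100.0, 110.0, 95.0, 120.0, 105.0])
y = np.array([35.0, 40.0, 30.0, 45.0, 38.0])

n = len(x)
x_bar, y_bar = x.mean(), y.mean()                     # sample means
s_x, s_y = x.std(ddof=1), y.std(ddof=1)               # sample standard deviations
s_xy = np.sum((x - x_bar) * (y - y_bar)) / (n - 1)    # sample covariance
r = s_xy / (s_x * s_y)                                # sample correlation coefficient

print(x_bar, s_x, y_bar, s_y, r)
```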

Simple Linear Regression

(x_i, y_i) are the observed values; \hat{y}_i is the predicted value of y_i on the best-fit line

\hat{y}_i = a + b\,x_i

Error,

e_i = y_i - \hat{y}_i

Sum of squared errors,

M = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = \sum_{i=1}^{n} \left\{ y_i - (a + b\,x_i) \right\}^2

Estimate the parameters a and b such that the sum of squared errors is minimum.
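Setting ∂M/∂a = 0 and ∂M/∂b = 0 gives the least-squares estimates b = Σ(x_i − x̄)(y_i − ȳ)/Σ(x_i − x̄)² and a = ȳ − b x̄. A minimal sketch with assumed data (not from the notes), checked against numpy's fit:

```python
import numpy as np

# Illustrative data (assumed, not from the notes)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 3.7, 5.2, 5.8])

# Minimizing M = sum (y_i - (a + b*x_i))^2 leads to the normal equations, whose solution is:
b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
a = y.mean() - b * x.mean()

y_hat = a + b * x                     # predicted values on the best-fit line
sse = np.sum((y - y_hat)**2)          # sum of squared errors at the minimum

print(a, b, sse)
print(np.polyfit(x, y, 1))            # numpy's least-squares fit: [b, a], should match
```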