Thursday, January 4, 2018
Fitting a Model by Maximum Likelihood
Maximum Likelihood Estimation (MLE) is a statistical technique for estimating model parameters. It basically sets out to answer the question: what model parameters are most likely to characterise a given set of data? First you need to select a model for the data, and the model must have one or more (unknown) parameters. As the name implies, MLE proceeds to maximise a likelihood function, which in turn maximises the agreement between the model and the data.
Most illustrative examples of MLE aim to derive the parameters for a probability density function (PDF) of a particular distribution. In this case the likelihood function is obtained by considering the PDF not as a function of the sample variable, but as a function of the distribution's parameters. Each data point then yields a function of those parameters, and the joint likelihood of the full data set is the product of these functions. This product is generally very small indeed, so the likelihood function is normally replaced by a log-likelihood function. Maximising either the likelihood or the log-likelihood yields the same result, but the latter is just a little more tractable!
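In symbols (writing f(x; θ) for the model PDF and x₁, …, xₙ for the data; this notation is introduced here for illustration, not taken from the original post):

\[
L(\theta) = \prod_{i=1}^{n} f(x_i; \theta), \qquad
\ell(\theta) = \log L(\theta) = \sum_{i=1}^{n} \log f(x_i; \theta).
\]

Since the logarithm is monotone, the value of θ that maximises ℓ(θ) also maximises L(θ).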
Fitting a Normal Distribution
Let's illustrate with a simple example: fitting a normal distribution. First we generate some data.
> set.seed(1001)
>
> N <- 100
>
> x <- rnorm(N, mean = 3, sd = 2)
>
> mean(x)
[1] 2.998305
> sd(x)
[1] 2.288979
Then we formulate the log-likelihood function.
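The original code listing did not survive extraction, so here is a minimal sketch of what such a function looks like in R, assuming the data vector x generated above (the function name LL and the use of stats4::mle() are illustrative choices, not necessarily the original post's code):

library(stats4)  # provides mle()

# Treat the normal density as a function of the parameters (mu, sigma)
# rather than of the data, and sum the log-densities over all points.
# mle() minimises, so we return the *negative* log-likelihood.
LL <- function(mu, sigma) {
  -sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))
}

# Maximise the likelihood, constraining sigma to stay positive.
fit <- mle(LL, start = list(mu = 1, sigma = 1),
           method = "L-BFGS-B", lower = c(-Inf, 1e-6))
summary(fit)

With the data above, the fitted mu and sigma should land close to the sample mean (2.998305) and sample standard deviation (2.288979). The box constraint on sigma is just a convenient way to keep the optimiser out of the region where dnorm() is undefined.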