CCJ PRML Study Note - Chapter 1 Summary : MLE (Maximum-likelihood Estimate) and Bayesian Approach

Christopher M. Bishop, PRML, Chapter 1 Introduction

1. Notations and Logical Relation

  • Training data: $N$ input values $\mathbf{x} = (x_1, \dots, x_N)^T$ and their corresponding target values $\mathbf{t} = (t_1, \dots, t_N)^T$. For simplicity, written as $\mathbf{x}$ and $\mathbf{t}$.
  • Goal of Making Prediction: to be able to make predictions for the target variable $t$ given some new value of the input variable $x$.
  • Assumption of the predictive distribution over $t$: we shall assume that, given the value of $x$, the corresponding value of $t$ has a Gaussian distribution with a mean equal to the value $y(x, \mathbf{w})$ of the polynomial curve given by (1.1). Thus we have $$p(t \mid x, \mathbf{w}, \beta) = \mathcal{N}\left(t \mid y(x, \mathbf{w}), \beta^{-1}\right) \tag{1.60}$$
  • Likelihood function of i.i.d. training data $\{\mathbf{x}, \mathbf{t}\}$: $$p(\mathbf{t} \mid \mathbf{x}, \mathbf{w}, \beta) = \prod_{n=1}^{N} \mathcal{N}\left(t_n \mid y(x_n, \mathbf{w}), \beta^{-1}\right) \tag{1.61}$$ with log-likelihood $$\ln p(\mathbf{t} \mid \mathbf{x}, \mathbf{w}, \beta) = -\frac{\beta}{2} \sum_{n=1}^{N} \left\{y(x_n, \mathbf{w}) - t_n\right\}^2 + \frac{N}{2} \ln \beta - \frac{N}{2} \ln(2\pi) \tag{1.62}$$
  • MLE of parameters $\mathbf{w}$ and $\beta$:
    • $\mathbf{w}_{\mathrm{ML}}$: maximizing (1.62) with respect to $\mathbf{w}$ is equivalent to minimizing the sum-of-squares error, i.e., least squares for linear regression
    • $\beta_{\mathrm{ML}}$: $$\frac{1}{\beta_{\mathrm{ML}}} = \frac{1}{N} \sum_{n=1}^{N} \left\{y(x_n, \mathbf{w}_{\mathrm{ML}}) - t_n\right\}^2 \tag{1.63}$$
  • ML plugin prediction for new values of $x$: substituting the maximum likelihood parameters into (1.60) to give

$$p(t \mid x, \mathbf{w}_{\mathrm{ML}}, \beta_{\mathrm{ML}}) = \mathcal{N}\left(t \mid y(x, \mathbf{w}_{\mathrm{ML}}), \beta_{\mathrm{ML}}^{-1}\right) \tag{1.64}$$
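The MLE step above can be sketched in a few lines of NumPy: least squares gives $\mathbf{w}_{\mathrm{ML}}$, and (1.63) gives $\beta_{\mathrm{ML}}$. The toy data and the polynomial order here are my own assumptions for illustration, not values from the book:

```python
import numpy as np

# Hypothetical toy data: noisy samples of sin(2*pi*x), similar in spirit to PRML Fig. 1.2
rng = np.random.default_rng(0)
N, M = 10, 3                                   # N data points, order-M polynomial (assumed values)
x = np.linspace(0, 1, N)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, N)

# Design matrix with elements x_n^i for i = 0..M, so y(x_n, w) = Phi @ w
Phi = np.vander(x, M + 1, increasing=True)

# w_ML: maximizing the likelihood is equivalent to minimizing the sum-of-squares error
w_ml, *_ = np.linalg.lstsq(Phi, t, rcond=None)

# beta_ML from (1.63): 1/beta_ML = (1/N) * sum_n {y(x_n, w_ML) - t_n}^2
residuals = Phi @ w_ml - t
beta_ml = N / np.sum(residuals ** 2)

# ML plug-in prediction (1.64) at a new input: Gaussian with mean y(x, w_ML), variance 1/beta_ML
x_new = 0.5
mean = np.vander([x_new], M + 1, increasing=True) @ w_ml
var = 1.0 / beta_ml
```

Note that the plug-in predictive variance $1/\beta_{\mathrm{ML}}$ is the same at every $x$; the fully Bayesian treatment below will give an $x$-dependent variance.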

  • Prior distribution over $\mathbf{w}$: For simplicity, let us consider a Gaussian distribution of the form $$p(\mathbf{w} \mid \alpha) = \mathcal{N}\left(\mathbf{w} \mid \mathbf{0}, \alpha^{-1}\mathbf{I}\right) = \left(\frac{\alpha}{2\pi}\right)^{(M+1)/2} \exp\left\{-\frac{\alpha}{2} \mathbf{w}^T \mathbf{w}\right\} \tag{1.65}$$ where
    • hyperparameter $\alpha$ is the precision of the distribution,
    • $M+1$ is the total number of elements in the vector $\mathbf{w}$ for an order-$M$ polynomial.
  • Posterior distribution for $\mathbf{w}$: using Bayes' Theorem, $$p(\mathbf{w} \mid \mathbf{x}, \mathbf{t}, \alpha, \beta) \propto p(\mathbf{t} \mid \mathbf{x}, \mathbf{w}, \beta)\, p(\mathbf{w} \mid \alpha) \tag{1.66}$$
  • MAP: a step towards a more Bayesian approach; note MAP is still a point estimate. We find that the maximum of the posterior is given by the minimum of $$\frac{\beta}{2} \sum_{n=1}^{N} \left\{y(x_n, \mathbf{w}) - t_n\right\}^2 + \frac{\alpha}{2} \mathbf{w}^T \mathbf{w} \tag{1.67}$$

Although we have included a prior distribution $p(\mathbf{w} \mid \alpha)$, we are so far still making a point estimate of $\mathbf{w}$ and so this does not yet amount to a Bayesian treatment. In a fully Bayesian approach, we should consistently apply the sum and product rules of probability, which requires, as we shall see shortly, that we integrate over all values of $\mathbf{w}$. Such marginalizations lie at the heart of Bayesian methods for pattern recognition.
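Setting the gradient of (1.67) to zero shows that the MAP estimate is ridge regression with regularization coefficient $\lambda = \alpha/\beta$, so it has a closed form. A minimal sketch (the data and the values of $\alpha$ and $\beta$ are assumptions for illustration):

```python
import numpy as np

# Hypothetical toy data; alpha and beta values are assumed for illustration
rng = np.random.default_rng(0)
N, M = 10, 9
alpha, beta = 5e-3, 11.1
x = np.linspace(0, 1, N)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, N)
Phi = np.vander(x, M + 1, increasing=True)     # design matrix, columns x^0 .. x^M

# Minimizing (1.67): grad = beta * Phi^T (Phi w - t) + alpha * w = 0
# => w_MAP = (lambda * I + Phi^T Phi)^{-1} Phi^T t, with lambda = alpha / beta
lam = alpha / beta
w_map = np.linalg.solve(lam * np.eye(M + 1) + Phi.T @ Phi, Phi.T @ t)
```

The quadratic penalty $\frac{\alpha}{2}\mathbf{w}^T\mathbf{w}$ from the Gaussian prior is what keeps the order-9 fit from oscillating wildly, but $\mathbf{w}_{\mathrm{MAP}}$ is still a single point in weight space.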

  • Fully Bayesian approach:
    • Here we shall assume that the parameters $\alpha$ and $\beta$ are fixed and known in advance (in later chapters we shall discuss how such parameters can be inferred from data in a Bayesian setting).
    • A Bayesian treatment simply corresponds to a consistent application of the sum and product rules of probability, which allow the predictive distribution to be written in the form $$p(t \mid x, \mathbf{x}, \mathbf{t}) = \int p(t \mid x, \mathbf{w})\, p(\mathbf{w} \mid \mathbf{x}, \mathbf{t})\, d\mathbf{w} \tag{1.68}$$
  • Result of Integration in (1.68):
    • (1.66): this posterior distribution is a Gaussian and can be evaluated analytically.
    • (1.68) can also be performed analytically, with the result that the predictive distribution is given by a Gaussian of the form $$p(t \mid x, \mathbf{x}, \mathbf{t}) = \mathcal{N}\left(t \mid m(x), s^2(x)\right) \tag{1.69}$$ where the mean and variance are given by $$m(x) = \beta\, \boldsymbol{\phi}(x)^T \mathbf{S} \sum_{n=1}^{N} \boldsymbol{\phi}(x_n)\, t_n \tag{1.70}$$ $$s^2(x) = \beta^{-1} + \boldsymbol{\phi}(x)^T \mathbf{S}\, \boldsymbol{\phi}(x) \tag{1.71}$$ Here the matrix $\mathbf{S}$ is given by $$\mathbf{S}^{-1} = \alpha \mathbf{I} + \beta \sum_{n=1}^{N} \boldsymbol{\phi}(x_n)\, \boldsymbol{\phi}(x_n)^T \tag{1.72}$$ where $\mathbf{I}$ is the unit matrix, and we have defined the vector $\boldsymbol{\phi}(x)$ with elements $\phi_i(x) = x^i$ for $i = 0, \dots, M$.
  
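The closed-form predictive distribution (1.69)-(1.72) is straightforward to compute directly. A sketch with toy data, using $M = 9$, $\alpha = 5\times10^{-3}$, $\beta = 11.1$ as in the book's curve-fitting example (the data itself is assumed):

```python
import numpy as np

# Hypothetical toy data; M, alpha, beta follow PRML's curve-fitting example
rng = np.random.default_rng(1)
N, M = 10, 9
alpha, beta = 5e-3, 11.1
x = np.linspace(0, 1, N)
t = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, N)

def phi(x):
    """Basis vector phi(x) with elements phi_i(x) = x^i for i = 0..M."""
    return np.power(x, np.arange(M + 1))

# S^{-1} = alpha * I + beta * sum_n phi(x_n) phi(x_n)^T     (1.72)
S_inv = alpha * np.eye(M + 1) + beta * sum(np.outer(phi(xn), phi(xn)) for xn in x)
S = np.linalg.inv(S_inv)

def predictive(x_new):
    """Mean (1.70) and variance (1.71) of the Gaussian predictive distribution (1.69)."""
    m = beta * phi(x_new) @ S @ sum(phi(xn) * tn for xn, tn in zip(x, t))
    s2 = 1.0 / beta + phi(x_new) @ S @ phi(x_new)
    return m, s2

m, s2 = predictive(0.5)
```

Note how $s^2(x)$ in (1.71) splits into the noise term $\beta^{-1}$, which the ML plug-in prediction also has, plus $\boldsymbol{\phi}(x)^T \mathbf{S}\, \boldsymbol{\phi}(x)$, which expresses the remaining uncertainty in $\mathbf{w}$ and varies with $x$.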

2. Flowchart

The relation among all of the equations and notions above:

[Flowchart image not preserved]

 
posted @ 2016-06-21 01:20 GloryOfFamily