# Bati's eHome of Tech


1. In statistics, a statistic is sufficient for the parameter θ, which indexes the distribution family of the data, precisely when the conditional probability distribution of the data, given the value of the statistic, does not depend on θ: P(x | t, θ) = P(x | t).
2. Suppose one has samples from a distribution, does not know exactly what that distribution is, but does know that it comes from a certain set of distributions determined partly or wholly by a parameter θ. A statistic is sufficient for inference about θ if and only if the values of any sample from that distribution give no more information about θ than does the value of the statistic computed on that sample. For example, if we know that a distribution is normal with variance 1 but unknown mean, the sample average is a sufficient statistic for the mean.
3. Sufficient statistics have many uses in statistical inference problems. In hypothesis testing, the likelihood ratio test can often be reduced to a sufficient statistic of the data. In parameter estimation, the minimum variance unbiased estimator (MVUE) of a parameter θ can be characterized by sufficient statistics and the Rao-Blackwell Theorem. Minimal sufficient statistics are, roughly speaking, sufficient statistics that cannot be compressed any further without losing information about the unknown parameter. Completeness is a technical characterization of sufficient statistics that allows one to prove minimality. These topics are covered in detail in this module. Further examples of sufficient statistics may be found in the module on the Fisher-Neyman Factorization Theorem.
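The normal-mean example above can be checked numerically. A minimal sketch (not from the original post, and using hypothetical variable names): for N(μ, 1) data, the log-likelihood depends on the sample only through its mean, so two datasets sharing the same sample mean should differ in log-likelihood by a constant that does not involve μ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
x = rng.normal(2.0, 1.0, n)

# Build a second dataset y with different values but the same sample mean.
y = x + rng.normal(0.0, 1.0, n)
y = y - y.mean() + x.mean()

def loglik(data, mu):
    # Log-likelihood of N(mu, 1), up to an additive constant in mu.
    return -0.5 * np.sum((data - mu) ** 2)

# Scan a range of candidate means and compare log-likelihoods.
mus = np.linspace(-3.0, 3.0, 13)
diffs = [loglik(x, mu) - loglik(y, mu) for mu in mus]

# The difference is constant in mu: the likelihood ratio is free of the
# parameter, which is exactly what sufficiency of the sample mean implies.
assert np.allclose(diffs, diffs[0])
```

Since Σxᵢ = Σyᵢ, the μ-dependent terms cancel in the difference, which is the Fisher-Neyman factorization at work for this family.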
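The Rao-Blackwell point in item 3 can also be illustrated with a small simulation (a sketch under assumed parameter values, not from the original post): for Bernoulli(p) samples, the first observation X₁ is an unbiased but noisy estimator of p; conditioning it on the sufficient statistic T = Σ Xᵢ gives E[X₁ | T] = T/n, which is still unbiased but has much lower variance.

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, trials = 0.3, 20, 5000

# trials independent samples of size n from Bernoulli(p)
samples = rng.binomial(1, p, size=(trials, n))

crude = samples[:, 0].astype(float)  # X1: unbiased for p, variance p(1-p)
t = samples.sum(axis=1)              # sufficient statistic T = sum of X_i
rao_blackwell = t / n                # E[X1 | T] = T / n

# Both estimators are unbiased, but conditioning on the sufficient
# statistic shrinks the variance by roughly a factor of n.
assert abs(crude.mean() - p) < 0.05
assert abs(rao_blackwell.mean() - p) < 0.05
assert rao_blackwell.var() < crude.var()
```

This is the Rao-Blackwell Theorem in miniature: averaging an unbiased estimator over the conditional distribution given a sufficient statistic never increases its variance.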


posted on 2008-10-29 15:25 by Bati